Oct 06 15:00:58 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 06 15:00:59 crc restorecon[4570]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 
15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc 
restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:00:59 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 15:01:00 crc restorecon[4570]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 15:01:00 crc restorecon[4570]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 06 15:01:00 crc kubenswrapper[4888]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 15:01:00 crc kubenswrapper[4888]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 06 15:01:00 crc kubenswrapper[4888]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 15:01:00 crc kubenswrapper[4888]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
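The kubelet entries above warn that --container-runtime-endpoint, --minimum-container-ttl-duration, --volume-plugin-dir, and --register-with-taints (with --pod-infra-container-image and --system-reserved following just below) are deprecated in favor of the config file passed via --config. As a minimal sketch of what that migration could look like, assuming CRI-O's conventional socket path and placeholder resource values that are not read from this node:

    # Hypothetical kubelet config file (passed via --config); every value below is
    # an illustrative assumption, not taken from this machine's actual settings.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock     # replaces --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec # replaces --volume-plugin-dir
    registerWithTaints:                                          # replaces --register-with-taints
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
    systemReserved:                                              # replaces --system-reserved
      cpu: 500m
      memory: 1Gi
    evictionHard:                 # --minimum-container-ttl-duration is superseded
      memory.available: 100Mi     # by eviction thresholds, per the log message
    # --pod-infra-container-image has no config-file equivalent; per the log, the
    # image garbage collector now takes sandbox image information from the CRI runtime.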
Oct 06 15:01:00 crc kubenswrapper[4888]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 06 15:01:00 crc kubenswrapper[4888]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.658510 4888 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667641 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667684 4888 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667693 4888 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667704 4888 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667713 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667722 4888 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667731 4888 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667760 4888 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667769 4888 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667778 4888 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667787 4888 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667826 4888 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667836 4888 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667846 4888 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667855 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667863 4888 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667870 4888 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667878 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667889 4888 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667900 4888 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667910 4888 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667925 4888 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667951 4888 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667964 4888 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667976 4888 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667986 4888 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.667996 4888 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668005 4888 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668016 4888 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668026 4888 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668036 4888 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668048 4888 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668059 4888 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668071 4888 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668085 4888 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668096 4888 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668106 4888 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668117 4888 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668136 4888 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668153 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668164 4888 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668176 4888 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668185 4888 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668198 4888 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668206 4888 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668214 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668222 4888 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668231 4888 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668239 4888 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668247 4888 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668254 4888 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668261 4888 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668270 4888 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668281 4888 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668291 4888 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668300 4888 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668309 4888 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668319 4888 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668328 4888 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668336 4888 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668343 4888 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668354 4888 feature_gate.go:330] 
unrecognized feature gate: Example Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668361 4888 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668369 4888 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668377 4888 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668384 4888 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668391 4888 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668399 4888 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668407 4888 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668415 4888 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.668422 4888 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669608 4888 flags.go:64] FLAG: --address="0.0.0.0" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669634 4888 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669649 4888 flags.go:64] FLAG: --anonymous-auth="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669663 4888 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669676 4888 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669685 4888 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669697 4888 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669708 4888 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669721 4888 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669731 4888 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669741 4888 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669751 4888 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669760 4888 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669770 4888 flags.go:64] FLAG: --cgroup-root="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669780 4888 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669790 4888 flags.go:64] FLAG: --client-ca-file="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669826 4888 flags.go:64] FLAG: --cloud-config="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669836 4888 flags.go:64] FLAG: --cloud-provider="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669845 4888 flags.go:64] FLAG: --cluster-dns="[]" 
Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669858 4888 flags.go:64] FLAG: --cluster-domain="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669867 4888 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669876 4888 flags.go:64] FLAG: --config-dir="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669885 4888 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669894 4888 flags.go:64] FLAG: --container-log-max-files="5" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669906 4888 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669915 4888 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669924 4888 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669933 4888 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669942 4888 flags.go:64] FLAG: --contention-profiling="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669951 4888 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669961 4888 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669970 4888 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669979 4888 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669990 4888 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.669999 4888 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670008 4888 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670017 4888 flags.go:64] FLAG: --enable-load-reader="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670026 4888 flags.go:64] FLAG: --enable-server="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670034 4888 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670046 4888 flags.go:64] FLAG: --event-burst="100" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670055 4888 flags.go:64] FLAG: --event-qps="50" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670064 4888 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670073 4888 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670081 4888 flags.go:64] FLAG: --eviction-hard="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670094 4888 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670108 4888 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670117 4888 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670126 4888 flags.go:64] FLAG: --eviction-soft="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670137 4888 flags.go:64] FLAG: 
--eviction-soft-grace-period="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670146 4888 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670155 4888 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670164 4888 flags.go:64] FLAG: --experimental-mounter-path="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670173 4888 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670181 4888 flags.go:64] FLAG: --fail-swap-on="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670190 4888 flags.go:64] FLAG: --feature-gates="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670201 4888 flags.go:64] FLAG: --file-check-frequency="20s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670210 4888 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670219 4888 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670228 4888 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670237 4888 flags.go:64] FLAG: --healthz-port="10248" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670246 4888 flags.go:64] FLAG: --help="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670256 4888 flags.go:64] FLAG: --hostname-override="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670264 4888 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670274 4888 flags.go:64] FLAG: --http-check-frequency="20s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670283 4888 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670292 4888 flags.go:64] FLAG: --image-credential-provider-config="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670333 4888 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670343 4888 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670353 4888 flags.go:64] FLAG: --image-service-endpoint="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670361 4888 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670371 4888 flags.go:64] FLAG: --kube-api-burst="100" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670380 4888 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670390 4888 flags.go:64] FLAG: --kube-api-qps="50" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670400 4888 flags.go:64] FLAG: --kube-reserved="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670409 4888 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670418 4888 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670427 4888 flags.go:64] FLAG: --kubelet-cgroups="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670437 4888 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670446 4888 flags.go:64] FLAG: --lock-file="" 
Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670454 4888 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670464 4888 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670473 4888 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670487 4888 flags.go:64] FLAG: --log-json-split-stream="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670495 4888 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670504 4888 flags.go:64] FLAG: --log-text-split-stream="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670514 4888 flags.go:64] FLAG: --logging-format="text" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670523 4888 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670533 4888 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670544 4888 flags.go:64] FLAG: --manifest-url="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670553 4888 flags.go:64] FLAG: --manifest-url-header="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670565 4888 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670575 4888 flags.go:64] FLAG: --max-open-files="1000000" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670587 4888 flags.go:64] FLAG: --max-pods="110" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670596 4888 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670606 4888 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670615 4888 flags.go:64] FLAG: --memory-manager-policy="None" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670625 4888 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670634 4888 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670644 4888 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670653 4888 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670675 4888 flags.go:64] FLAG: --node-status-max-images="50" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670684 4888 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670693 4888 flags.go:64] FLAG: --oom-score-adj="-999" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670702 4888 flags.go:64] FLAG: --pod-cidr="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670711 4888 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670726 4888 flags.go:64] FLAG: --pod-manifest-path="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670734 4888 flags.go:64] FLAG: --pod-max-pids="-1" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670744 4888 
flags.go:64] FLAG: --pods-per-core="0" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670753 4888 flags.go:64] FLAG: --port="10250" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670764 4888 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670773 4888 flags.go:64] FLAG: --provider-id="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670782 4888 flags.go:64] FLAG: --qos-reserved="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670791 4888 flags.go:64] FLAG: --read-only-port="10255" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670833 4888 flags.go:64] FLAG: --register-node="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670842 4888 flags.go:64] FLAG: --register-schedulable="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670851 4888 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670866 4888 flags.go:64] FLAG: --registry-burst="10" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670875 4888 flags.go:64] FLAG: --registry-qps="5" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670884 4888 flags.go:64] FLAG: --reserved-cpus="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670892 4888 flags.go:64] FLAG: --reserved-memory="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670903 4888 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670912 4888 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670922 4888 flags.go:64] FLAG: --rotate-certificates="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670932 4888 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670942 4888 flags.go:64] FLAG: --runonce="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670951 4888 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670961 4888 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670970 4888 flags.go:64] FLAG: --seccomp-default="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670979 4888 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670988 4888 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.670997 4888 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671006 4888 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671015 4888 flags.go:64] FLAG: --storage-driver-password="root" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671023 4888 flags.go:64] FLAG: --storage-driver-secure="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671032 4888 flags.go:64] FLAG: --storage-driver-table="stats" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671041 4888 flags.go:64] FLAG: --storage-driver-user="root" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671050 4888 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671060 4888 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 
06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671069 4888 flags.go:64] FLAG: --system-cgroups="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671079 4888 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671092 4888 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671102 4888 flags.go:64] FLAG: --tls-cert-file="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671111 4888 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671122 4888 flags.go:64] FLAG: --tls-min-version="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671131 4888 flags.go:64] FLAG: --tls-private-key-file="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671139 4888 flags.go:64] FLAG: --topology-manager-policy="none" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671148 4888 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671157 4888 flags.go:64] FLAG: --topology-manager-scope="container" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671166 4888 flags.go:64] FLAG: --v="2" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671178 4888 flags.go:64] FLAG: --version="false" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671190 4888 flags.go:64] FLAG: --vmodule="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671201 4888 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.671210 4888 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671414 4888 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671426 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671436 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671444 4888 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671452 4888 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671460 4888 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671467 4888 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671476 4888 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671484 4888 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671494 4888 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671504 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671512 4888 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671521 4888 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671529 4888 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671537 4888 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671545 4888 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671553 4888 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671562 4888 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671569 4888 feature_gate.go:330] unrecognized feature gate: Example Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671577 4888 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671585 4888 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671593 4888 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671600 4888 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671608 4888 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671616 4888 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671624 4888 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671631 4888 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671639 4888 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671647 4888 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671655 4888 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671663 4888 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671670 4888 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671678 4888 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671686 4888 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671693 4888 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671701 4888 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 06 
15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671708 4888 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671718 4888 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671729 4888 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671738 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671746 4888 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671754 4888 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671761 4888 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671770 4888 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671779 4888 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671787 4888 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671819 4888 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671827 4888 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671836 4888 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671843 4888 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671851 4888 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671858 4888 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671866 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671874 4888 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671881 4888 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671891 4888 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671901 4888 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671909 4888 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671917 4888 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671928 4888 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671939 4888 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671950 4888 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671959 4888 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671970 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671980 4888 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671987 4888 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.671995 4888 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.672004 4888 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.672013 4888 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.672022 4888 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.672029 4888 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.672062 4888 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.683698 4888 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.683765 4888 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683905 4888 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683919 4888 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683925 4888 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683931 4888 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683937 4888 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683942 4888 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683946 4888 feature_gate.go:330] unrecognized feature gate: Example Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683951 4888 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683958 4888 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683963 4888 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683967 4888 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683973 4888 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683980 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683984 4888 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683990 4888 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683995 4888 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.683999 4888 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684004 4888 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684013 4888 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684019 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684024 4888 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684029 4888 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684034 4888 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684039 4888 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684045 4888 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684050 4888 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684054 4888 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684058 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684063 4888 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684067 4888 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684071 4888 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684076 4888 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684081 4888 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684085 4888 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684091 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684096 4888 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684101 4888 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684106 4888 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684110 4888 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684115 4888 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684119 4888 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684124 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684128 4888 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684133 4888 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684137 4888 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684141 4888 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 
15:01:00.684145 4888 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684149 4888 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684154 4888 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684159 4888 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684164 4888 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684168 4888 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684172 4888 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684176 4888 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684181 4888 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684185 4888 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684189 4888 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684193 4888 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684198 4888 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684202 4888 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684206 4888 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684210 4888 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684215 4888 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684219 4888 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684223 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684227 4888 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684232 4888 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684236 4888 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684241 4888 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684245 4888 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684252 4888 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.684262 4888 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false 
EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684425 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684436 4888 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684440 4888 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684445 4888 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684450 4888 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684455 4888 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684459 4888 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684464 4888 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684468 4888 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684472 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684477 4888 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684482 4888 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684486 4888 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684491 4888 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684495 4888 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684499 4888 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684504 4888 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684508 4888 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684514 4888 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684520 4888 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684524 4888 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684529 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684536 4888 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684541 4888 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684546 4888 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684550 4888 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684555 4888 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684560 4888 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684564 4888 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684568 4888 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684572 4888 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684577 4888 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684582 4888 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684586 4888 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684592 4888 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684597 4888 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684601 4888 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684605 4888 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684609 4888 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684613 4888 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684619 4888 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684624 4888 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684628 4888 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684632 4888 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684636 4888 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684641 4888 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684645 4888 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684649 4888 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684653 4888 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684657 4888 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684661 4888 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684665 4888 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684669 4888 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684673 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684678 4888 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684683 4888 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684687 4888 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684692 4888 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684696 4888 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684701 4888 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684705 4888 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684709 4888 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684715 4888 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684720 4888 feature_gate.go:330] unrecognized feature gate: Example Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684725 4888 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684730 4888 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684734 4888 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684738 4888 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684742 4888 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684747 4888 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.684760 4888 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.684769 4888 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.685879 4888 server.go:940] "Client rotation is on, will bootstrap in background" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.695453 4888 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.695621 4888 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.699616 4888 server.go:997] "Starting client certificate rotation" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.699682 4888 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.700853 4888 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-08 02:35:09.168769743 +0000 UTC Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.701063 4888 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1499h34m8.467755605s for next certificate rotation Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.735105 4888 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.738645 4888 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.758340 4888 log.go:25] "Validated CRI v1 runtime API" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.797538 4888 log.go:25] "Validated CRI v1 image API" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.799710 4888 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.806764 4888 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-06-14-41-00-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.806825 4888 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.820353 4888 manager.go:217] Machine: {Timestamp:2025-10-06 15:01:00.817758132 +0000 UTC m=+0.630108880 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799886 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f107361e-9ed9-4a24-a32e-a76cb5e92926 BootID:be6bc275-7f5d-4ec6-b349-88bdcff88efc Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 
Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:7d:83:3a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7d:83:3a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:2b:84:de Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c5:84:4a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:51:2c:ba Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ba:70:65 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e6:97:37:b9:67:6a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:8a:81:f6:fb:30:a6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.820604 4888 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
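A few entries up, certificate_manager.go reports an expiration of 2026-02-24 05:52:08 and a rotation deadline of 2025-12-08 02:35:09: renewal is scheduled well before expiry, at a jittered point roughly 70-85% into the certificate's lifetime, so that a fleet of kubelets does not renew in lockstep. A sketch of that deadline computation follows; only the expiry comes from the log, and the issuance time is an assumption (one year of validity).

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a renewal time at a jittered fraction of the
// certificate's lifetime (roughly 70-85% here), which is the shape of
// policy implied by the expiration/deadline pair in the log.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.70 + 0.14*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry is taken from the log; the issuance time is assumed.
	issued := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
	expires := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	deadline := rotationDeadline(issued, expires)

	// The "Waiting ..." line in the log is just deadline minus now.
	now := time.Date(2025, 10, 6, 15, 1, 0, 0, time.UTC)
	fmt.Printf("rotation deadline %s, waiting %s\n", deadline, deadline.Sub(now))
}
```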
Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.820733 4888 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.822741 4888 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.822984 4888 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.823027 4888 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.824840 4888 topology_manager.go:138] "Creating topology manager with none policy" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.824861 4888 container_manager_linux.go:303] "Creating device plugin manager" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.825636 4888 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.825667 4888 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.825841 4888 state_mem.go:36] "Initialized new in-memory state store" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.825922 4888 server.go:1245] "Using root directory" path="/var/lib/kubelet" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.830570 4888 kubelet.go:418] "Attempting to sync node with API server" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.830596 4888 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
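The nodeConfig dump above carries the eviction policy: each HardEvictionThresholds entry compares a signal (memory.available, nodefs.available, imagefs.available, ...) with LessThan against either an absolute quantity (100Mi) or a percentage of capacity (0.1, 0.05, 0.15). The Go sketch below illustrates the threshold semantics only, not the kubelet's eviction manager, using the MemoryCapacity and /var filesystem size reported earlier in this log.

```go
package main

import "fmt"

// threshold mirrors one HardEvictionThresholds entry from the
// nodeConfig dump: a signal compared against either an absolute
// quantity or a percentage of capacity.
type threshold struct {
	signal   string
	quantity uint64  // absolute bytes; 0 when percentage-based
	percent  float64 // fraction of capacity; 0 when quantity-based
}

// breached reports whether the available amount has fallen below the
// threshold, i.e. the "LessThan" comparison from the log.
func (t threshold) breached(available, capacity uint64) bool {
	limit := t.quantity
	if t.percent > 0 {
		limit = uint64(float64(capacity) * t.percent)
	}
	return available < limit
}

func main() {
	// memory.available < 100Mi and nodefs.available < 10%, as configured.
	mem := threshold{signal: "memory.available", quantity: 100 << 20}
	nodefs := threshold{signal: "nodefs.available", percent: 0.10}

	// MemoryCapacity (25199480832) and /var capacity (85292941312)
	// are the values reported earlier in this log.
	fmt.Println(mem.signal, mem.breached(80<<20, 25199480832))      // true: 80Mi < 100Mi
	fmt.Println(nodefs.signal, nodefs.breached(9<<30, 85292941312)) // false: 9Gi > 10% of ~85GB
}
```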
Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.830622 4888 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.830637 4888 kubelet.go:324] "Adding apiserver pod source" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.830653 4888 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.841148 4888 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.842717 4888 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.843965 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:00 crc kubenswrapper[4888]: E1006 15:01:00.844139 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.120:6443: connect: connection refused" logger="UnhandledError" Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.845058 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.845171 4888 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 06 15:01:00 crc kubenswrapper[4888]: E1006 15:01:00.845296 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.120:6443: connect: connection refused" logger="UnhandledError" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846773 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846822 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846833 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846842 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846856 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846865 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846873 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846888 4888 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846898 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846907 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846925 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.846935 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.847572 4888 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.848217 4888 server.go:1280] "Started kubelet" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.849099 4888 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.849351 4888 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.849325 4888 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 06 15:01:00 crc systemd[1]: Started Kubernetes Kubelet. Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.850855 4888 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.850890 4888 server.go:460] "Adding debug handlers to kubelet server" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.853023 4888 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.853081 4888 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.853235 4888 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:28:51.028758259 +0000 UTC Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.853301 4888 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1129h27m50.175460755s for next certificate rotation Oct 06 15:01:00 crc kubenswrapper[4888]: E1006 15:01:00.853624 4888 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.853671 4888 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.853990 4888 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.853720 4888 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 06 15:01:00 crc kubenswrapper[4888]: E1006 15:01:00.854603 4888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.120:6443: connect: connection refused" interval="200ms" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.855035 4888 factory.go:55] Registering 
systemd factory Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.855511 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:00 crc kubenswrapper[4888]: E1006 15:01:00.855562 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.120:6443: connect: connection refused" logger="UnhandledError" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.855695 4888 factory.go:221] Registration of the systemd container factory successfully Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.857923 4888 factory.go:153] Registering CRI-O factory Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.858147 4888 factory.go:221] Registration of the crio container factory successfully Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.858660 4888 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.858737 4888 factory.go:103] Registering Raw factory Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.858786 4888 manager.go:1196] Started watching for new ooms in manager Oct 06 15:01:00 crc kubenswrapper[4888]: E1006 15:01:00.855265 4888 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.120:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186beeff91c7e328 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-06 15:01:00.84817796 +0000 UTC m=+0.660528688,LastTimestamp:2025-10-06 15:01:00.84817796 +0000 UTC m=+0.660528688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.859819 4888 manager.go:319] Starting recovery of all containers Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872506 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872591 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872606 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872625 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872639 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872650 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872663 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872719 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872744 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872759 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872771 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872785 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872811 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872826 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872838 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872853 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872864 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872877 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872887 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872898 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872911 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872922 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872935 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872946 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872957 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" 
seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872971 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.872987 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873000 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873015 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873026 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873036 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873051 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873065 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873080 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873091 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873104 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 06 15:01:00 crc 
kubenswrapper[4888]: I1006 15:01:00.873125 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873136 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873152 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873165 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873183 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873199 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873217 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873232 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873242 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873254 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873273 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 
15:01:00.873285 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873298 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873310 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873320 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873337 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873360 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873374 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873389 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873404 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873416 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873430 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 
15:01:00.873439 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873452 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873465 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873476 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873485 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873499 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873508 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873522 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873532 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873541 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873553 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873562 4888 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873575 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873586 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873598 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873612 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873623 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873635 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873645 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873655 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873668 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873685 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873701 4888 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873789 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873815 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873826 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873841 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873856 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873871 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873886 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873897 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873911 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873920 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873934 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873946 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873963 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873977 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.873987 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874003 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874014 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874024 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874036 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874049 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874084 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874098 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874108 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874129 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874147 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874161 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874173 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874190 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874201 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874216 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874233 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874245 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874259 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874271 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874305 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874316 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874333 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874345 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874356 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874370 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874381 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874394 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874406 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874417 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874431 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874441 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874463 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874479 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874496 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874514 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874527 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874543 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874553 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874567 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874594 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874608 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874627 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874638 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874649 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874662 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874677 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874707 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874724 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874734 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874755 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874777 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874827 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874843 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874854 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.874866 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875014 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875047 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875057 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875080 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875093 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875105 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875118 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875128 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875142 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875155 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875245 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875287 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875305 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875315 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875325 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875398 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875440 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.875482 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877461 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877502 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877514 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877524 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877535 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877542 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877552 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877560 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877568 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877578 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877586 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877596 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877605 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877614 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877625 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877634 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877643 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877653 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877663 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.877676 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878633 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878661 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878674 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878688 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878699 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878709 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878718 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878727 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878738 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878746 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878756 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878765 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878774 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878784 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878806 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878815 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878842 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.878852 4888 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.883061 4888 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.883106 4888 reconstruct.go:97] "Volume reconstruction finished" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.883120 4888 reconciler.go:26] "Reconciler: start to sync state" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.885832 4888 manager.go:324] Recovery completed Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.895291 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.897034 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.897068 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.897080 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.897852 4888 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.897884 4888 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.897946 4888 state_mem.go:36] "Initialized new 
in-memory state store" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.915605 4888 policy_none.go:49] "None policy: Start" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.916414 4888 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.916444 4888 state_mem.go:35] "Initializing new in-memory state store" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.917643 4888 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.919922 4888 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.919973 4888 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.920062 4888 kubelet.go:2335] "Starting kubelet main sync loop" Oct 06 15:01:00 crc kubenswrapper[4888]: E1006 15:01:00.920136 4888 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 06 15:01:00 crc kubenswrapper[4888]: W1006 15:01:00.922941 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:00 crc kubenswrapper[4888]: E1006 15:01:00.923033 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.120:6443: connect: connection refused" logger="UnhandledError" Oct 06 15:01:00 crc kubenswrapper[4888]: E1006 15:01:00.954095 4888 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.961859 4888 manager.go:334] "Starting Device Plugin manager" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.961982 4888 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.961997 4888 server.go:79] "Starting device plugin registration server" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.962327 4888 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.962342 4888 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.963069 4888 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.963190 4888 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 06 15:01:00 crc kubenswrapper[4888]: I1006 15:01:00.963198 4888 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 06 15:01:00 crc kubenswrapper[4888]: E1006 15:01:00.970753 4888 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.021079 4888 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.021157 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.022098 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.022154 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.022164 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.022272 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.022421 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.022470 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.023226 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.023249 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.023262 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.023283 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.023308 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.023316 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.023453 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.023581 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.023601 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.024188 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.024212 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.024220 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.024356 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.024371 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.024379 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.024489 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.024581 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.024610 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.025269 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.025293 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.025303 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.025327 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.025342 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.025350 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.025421 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.025553 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.025581 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.026216 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.026245 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.026222 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.026256 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.026264 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.026272 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.026414 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.026440 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.027009 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.027036 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.027047 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: E1006 15:01:01.055169 4888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.120:6443: connect: connection refused" interval="400ms" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.062709 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.063620 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.063647 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.063655 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.063674 4888 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 15:01:01 crc kubenswrapper[4888]: E1006 15:01:01.064010 4888 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.120:6443: connect: 
connection refused" node="crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.084979 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085020 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085039 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085057 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085126 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085171 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085244 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085311 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085333 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 
15:01:01.085375 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085396 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085410 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085431 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085445 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.085489 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186508 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186571 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186589 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186604 4888 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186618 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186651 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186665 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186678 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186694 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186729 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186746 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186761 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186756 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 15:01:01 crc 
kubenswrapper[4888]: I1006 15:01:01.186817 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186748 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186844 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186861 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186891 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186755 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186903 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186870 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186892 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186864 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186775 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186852 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.186918 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.187027 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.187110 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.187184 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.187277 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.264113 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.266444 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.266505 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.266524 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.266557 4888 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 15:01:01 crc kubenswrapper[4888]: E1006 15:01:01.267111 4888 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.120:6443: connect: connection refused" node="crc" 
Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.344579 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.357424 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.370772 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.377892 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.381953 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:01 crc kubenswrapper[4888]: W1006 15:01:01.398150 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-588e921f3334b695cfa5b61e6ab9b7e3fd6e20e2ecc86270dacad7958e67cbe6 WatchSource:0}: Error finding container 588e921f3334b695cfa5b61e6ab9b7e3fd6e20e2ecc86270dacad7958e67cbe6: Status 404 returned error can't find the container with id 588e921f3334b695cfa5b61e6ab9b7e3fd6e20e2ecc86270dacad7958e67cbe6 Oct 06 15:01:01 crc kubenswrapper[4888]: W1006 15:01:01.400130 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-cc7eb341ba87001f867c6c01d1bd6c0d28e87c62de85e3224a40375cef9c4208 WatchSource:0}: Error finding container cc7eb341ba87001f867c6c01d1bd6c0d28e87c62de85e3224a40375cef9c4208: Status 404 returned error can't find the container with id cc7eb341ba87001f867c6c01d1bd6c0d28e87c62de85e3224a40375cef9c4208 Oct 06 15:01:01 crc kubenswrapper[4888]: W1006 15:01:01.405436 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a464494fe636bc7752b033a618410855651b38f1f214a6d8180d730822f5fdc8 WatchSource:0}: Error finding container a464494fe636bc7752b033a618410855651b38f1f214a6d8180d730822f5fdc8: Status 404 returned error can't find the container with id a464494fe636bc7752b033a618410855651b38f1f214a6d8180d730822f5fdc8 Oct 06 15:01:01 crc kubenswrapper[4888]: W1006 15:01:01.408441 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-7a6645f78efa82760fa4dc99313ff0498ef823b5b8e1135e421ca33fee9b17de WatchSource:0}: Error finding container 7a6645f78efa82760fa4dc99313ff0498ef823b5b8e1135e421ca33fee9b17de: Status 404 returned error can't find the container with id 7a6645f78efa82760fa4dc99313ff0498ef823b5b8e1135e421ca33fee9b17de Oct 06 15:01:01 crc kubenswrapper[4888]: W1006 15:01:01.410641 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4ba69e6d9b07174bf6c61e7bef3d8031b4f99fdc9e5277ba43226cbd353127bd WatchSource:0}: Error finding container 4ba69e6d9b07174bf6c61e7bef3d8031b4f99fdc9e5277ba43226cbd353127bd: Status 404 returned error can't 
find the container with id 4ba69e6d9b07174bf6c61e7bef3d8031b4f99fdc9e5277ba43226cbd353127bd Oct 06 15:01:01 crc kubenswrapper[4888]: E1006 15:01:01.455756 4888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.120:6443: connect: connection refused" interval="800ms" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.667416 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.669723 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.669770 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.669782 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.669834 4888 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 15:01:01 crc kubenswrapper[4888]: E1006 15:01:01.670264 4888 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.120:6443: connect: connection refused" node="crc" Oct 06 15:01:01 crc kubenswrapper[4888]: W1006 15:01:01.764123 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:01 crc kubenswrapper[4888]: E1006 15:01:01.764219 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.120:6443: connect: connection refused" logger="UnhandledError" Oct 06 15:01:01 crc kubenswrapper[4888]: W1006 15:01:01.804429 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:01 crc kubenswrapper[4888]: E1006 15:01:01.804517 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.120:6443: connect: connection refused" logger="UnhandledError" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.850697 4888 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.928813 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7a6645f78efa82760fa4dc99313ff0498ef823b5b8e1135e421ca33fee9b17de"} Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.930225 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a464494fe636bc7752b033a618410855651b38f1f214a6d8180d730822f5fdc8"} Oct 06 15:01:01 crc kubenswrapper[4888]: W1006 15:01:01.930349 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:01 crc kubenswrapper[4888]: E1006 15:01:01.930412 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.120:6443: connect: connection refused" logger="UnhandledError" Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.931097 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"cc7eb341ba87001f867c6c01d1bd6c0d28e87c62de85e3224a40375cef9c4208"} Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.932712 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"588e921f3334b695cfa5b61e6ab9b7e3fd6e20e2ecc86270dacad7958e67cbe6"} Oct 06 15:01:01 crc kubenswrapper[4888]: I1006 15:01:01.936608 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4ba69e6d9b07174bf6c61e7bef3d8031b4f99fdc9e5277ba43226cbd353127bd"} Oct 06 15:01:02 crc kubenswrapper[4888]: W1006 15:01:02.174855 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:02 crc kubenswrapper[4888]: E1006 15:01:02.174939 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.120:6443: connect: connection refused" logger="UnhandledError" Oct 06 15:01:02 crc kubenswrapper[4888]: E1006 15:01:02.257065 4888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.120:6443: connect: connection refused" interval="1.6s" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.470770 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.472511 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.472549 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.472583 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.472607 4888 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 15:01:02 crc kubenswrapper[4888]: E1006 15:01:02.473147 4888 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.120:6443: connect: connection refused" node="crc" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.850480 4888 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.941563 4888 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b" exitCode=0 Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.941631 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b"} Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.941752 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.942985 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.943004 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.943012 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.946051 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f"} Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.946125 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329"} Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.946151 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5"} Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.946173 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9"} Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.946318 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.947525 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.947569 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.947587 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.950096 4888 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee" exitCode=0 Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.950181 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee"} Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.950296 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.951235 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.951283 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.951305 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.952941 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.953739 4888 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="03634e4c28b75608db90959e12c973b22eed83b0ed3e7648581f7f74ce402ac0" exitCode=0 Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.953834 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"03634e4c28b75608db90959e12c973b22eed83b0ed3e7648581f7f74ce402ac0"} Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.953970 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.954611 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.954642 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.954658 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.954867 4888 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.954889 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.954897 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.956347 4888 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9c7794b5a198fa819961d77fe83dba20bd4bf89b342fb6030c2897109f8865f7" exitCode=0 Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.956381 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9c7794b5a198fa819961d77fe83dba20bd4bf89b342fb6030c2897109f8865f7"} Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.956465 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.962116 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.962139 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:02 crc kubenswrapper[4888]: I1006 15:01:02.962147 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.850086 4888 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:03 crc kubenswrapper[4888]: E1006 15:01:03.857637 4888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.120:6443: connect: connection refused" interval="3.2s" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.961495 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fae7e669c7a8b9c64247ae096f7903bde47dec2a619368865ddd801e54bf4ed8"} Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.961608 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.962819 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.962858 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.962870 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.965328 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6d9b275373213ab960807fc270a22dd7e7358b03717db13359a12ed85a5b7698"} Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.965381 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d931c3d197f4fdbc7658b3cc758073a4d2a864a44dde544c19247af1182415d9"} Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.965389 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.965397 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"723678c88214fc491c9f4ccbef6edb30aa659fa9420e676bfe71ea068c24b61d"} Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.966518 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.966559 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.966573 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.970754 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a77152d1ea878f15c50975bfba7588111d950be35ca933a5b48b18eac44a9923"} Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.970817 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.970839 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376"} Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.970860 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734"} Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.970874 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef"} Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.970889 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555"} Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.971692 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.971723 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.971732 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.972665 4888 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7ec070920d247cffacae1c178814e6cc3258b28960dbc7517f9bf12961854e6c" exitCode=0 Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.972697 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7ec070920d247cffacae1c178814e6cc3258b28960dbc7517f9bf12961854e6c"} Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.972760 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.972808 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.973585 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.973611 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.973593 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.973638 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.973648 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:03 crc kubenswrapper[4888]: I1006 15:01:03.973621 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:04 crc kubenswrapper[4888]: W1006 15:01:04.035220 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:04 crc kubenswrapper[4888]: E1006 15:01:04.035390 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.120:6443: connect: connection refused" logger="UnhandledError" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.074153 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.080274 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.080345 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.080357 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:04 crc 
kubenswrapper[4888]: I1006 15:01:04.080381 4888 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 15:01:04 crc kubenswrapper[4888]: E1006 15:01:04.080780 4888 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.120:6443: connect: connection refused" node="crc" Oct 06 15:01:04 crc kubenswrapper[4888]: W1006 15:01:04.410182 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:04 crc kubenswrapper[4888]: E1006 15:01:04.410308 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.120:6443: connect: connection refused" logger="UnhandledError" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.419978 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.420195 4888 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.420244 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Oct 06 15:01:04 crc kubenswrapper[4888]: W1006 15:01:04.455447 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.120:6443: connect: connection refused Oct 06 15:01:04 crc kubenswrapper[4888]: E1006 15:01:04.455534 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.120:6443: connect: connection refused" logger="UnhandledError" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.750445 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.760216 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.977032 4888 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6c159b021b0da99aac55a7f6eb94e58cc7594a52f74e594857360106085346e3" exitCode=0 Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.977210 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.977275 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.977349 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.977397 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.977770 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6c159b021b0da99aac55a7f6eb94e58cc7594a52f74e594857360106085346e3"} Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.977917 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.978241 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.978334 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.978390 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.978600 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.978921 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.978992 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.979055 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.979029 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.979224 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.979238 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.978969 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.979469 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.979488 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.980167 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.981258 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 
15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.981286 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:04 crc kubenswrapper[4888]: I1006 15:01:04.981302 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.104037 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.984733 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8325c8ebd07004724ed07283f70f31bdc17b84a3fb0ee95a3d2974834103f508"} Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.984826 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.984845 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8828a76e3dafa285448a26b68f7820c36e630cc911ee47cdf431b132bbe0fd52"} Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.984768 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.984878 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e986a98af4b511deed5d66730c7d13632467e289d41f2173e77baa133ce776e9"} Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.984989 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"25316b5f97af843636a21d8fd51d4016e60de1f56618f1d2339fe02c257c4d88"} Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.985008 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fe44321f0742e023d334748837be350d37b1516543279750a43e228862d34288"} Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.984834 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.985035 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.985380 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.985685 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.985705 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.985712 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.986257 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 
15:01:05.986282 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.986291 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.986315 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.986334 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.986345 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.986278 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.986422 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:05 crc kubenswrapper[4888]: I1006 15:01:05.986432 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:06 crc kubenswrapper[4888]: I1006 15:01:06.987367 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:06 crc kubenswrapper[4888]: I1006 15:01:06.987398 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:06 crc kubenswrapper[4888]: I1006 15:01:06.989032 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:06 crc kubenswrapper[4888]: I1006 15:01:06.989059 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:06 crc kubenswrapper[4888]: I1006 15:01:06.989081 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:06 crc kubenswrapper[4888]: I1006 15:01:06.989156 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:06 crc kubenswrapper[4888]: I1006 15:01:06.989112 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:06 crc kubenswrapper[4888]: I1006 15:01:06.989196 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.281831 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.283141 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.283172 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.283187 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.283213 4888 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 15:01:07 crc kubenswrapper[4888]: 
I1006 15:01:07.433460 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.433616 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.433652 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.434527 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.434555 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.434566 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.667437 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.990631 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.992080 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.992136 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:07 crc kubenswrapper[4888]: I1006 15:01:07.992168 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:08 crc kubenswrapper[4888]: I1006 15:01:08.199233 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:08 crc kubenswrapper[4888]: I1006 15:01:08.199482 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:08 crc kubenswrapper[4888]: I1006 15:01:08.200768 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:08 crc kubenswrapper[4888]: I1006 15:01:08.200830 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:08 crc kubenswrapper[4888]: I1006 15:01:08.200840 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:09 crc kubenswrapper[4888]: I1006 15:01:09.829745 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 06 15:01:09 crc kubenswrapper[4888]: I1006 15:01:09.830027 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:09 crc kubenswrapper[4888]: I1006 15:01:09.831635 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:09 crc kubenswrapper[4888]: I1006 15:01:09.831693 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:09 crc kubenswrapper[4888]: I1006 15:01:09.831706 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 06 15:01:10 crc kubenswrapper[4888]: I1006 15:01:10.279389 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:10 crc kubenswrapper[4888]: I1006 15:01:10.279655 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:10 crc kubenswrapper[4888]: I1006 15:01:10.281303 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:10 crc kubenswrapper[4888]: I1006 15:01:10.281381 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:10 crc kubenswrapper[4888]: I1006 15:01:10.281407 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:10 crc kubenswrapper[4888]: I1006 15:01:10.668671 4888 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 06 15:01:10 crc kubenswrapper[4888]: I1006 15:01:10.668752 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 06 15:01:10 crc kubenswrapper[4888]: E1006 15:01:10.970866 4888 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 06 15:01:14 crc kubenswrapper[4888]: W1006 15:01:14.594743 4888 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 06 15:01:14 crc kubenswrapper[4888]: I1006 15:01:14.594864 4888 trace.go:236] Trace[418471118]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 15:01:04.593) (total time: 10001ms): Oct 06 15:01:14 crc kubenswrapper[4888]: Trace[418471118]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:01:14.594) Oct 06 15:01:14 crc kubenswrapper[4888]: Trace[418471118]: [10.001656757s] [10.001656757s] END Oct 06 15:01:14 crc kubenswrapper[4888]: E1006 15:01:14.594893 4888 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 06 15:01:14 crc kubenswrapper[4888]: I1006 15:01:14.851511 4888 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Oct 06 15:01:15 
crc kubenswrapper[4888]: I1006 15:01:15.012734 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.018101 4888 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a77152d1ea878f15c50975bfba7588111d950be35ca933a5b48b18eac44a9923" exitCode=255 Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.018155 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a77152d1ea878f15c50975bfba7588111d950be35ca933a5b48b18eac44a9923"} Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.018315 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.019018 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.019050 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.019066 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.019504 4888 scope.go:117] "RemoveContainer" containerID="a77152d1ea878f15c50975bfba7588111d950be35ca933a5b48b18eac44a9923" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.108825 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.108933 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.110513 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.110553 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.110565 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.248065 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.334111 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.334317 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.336157 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.336224 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.336239 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.368585 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.673947 4888 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.674037 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.689129 4888 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 06 15:01:15 crc kubenswrapper[4888]: I1006 15:01:15.689232 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 06 15:01:16 crc kubenswrapper[4888]: I1006 15:01:16.022452 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 06 15:01:16 crc kubenswrapper[4888]: I1006 15:01:16.024183 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a"} Oct 06 15:01:16 crc kubenswrapper[4888]: I1006 15:01:16.024262 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:16 crc kubenswrapper[4888]: I1006 15:01:16.024273 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:16 crc kubenswrapper[4888]: I1006 15:01:16.025230 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:16 crc kubenswrapper[4888]: I1006 15:01:16.025259 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:16 crc kubenswrapper[4888]: I1006 15:01:16.025272 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:16 crc kubenswrapper[4888]: I1006 15:01:16.026002 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:16 crc kubenswrapper[4888]: I1006 15:01:16.026025 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:16 crc kubenswrapper[4888]: I1006 
15:01:16.026035 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:16 crc kubenswrapper[4888]: I1006 15:01:16.037929 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Oct 06 15:01:17 crc kubenswrapper[4888]: I1006 15:01:17.026839 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:17 crc kubenswrapper[4888]: I1006 15:01:17.026937 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:17 crc kubenswrapper[4888]: I1006 15:01:17.026988 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:17 crc kubenswrapper[4888]: I1006 15:01:17.028236 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:17 crc kubenswrapper[4888]: I1006 15:01:17.028315 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:17 crc kubenswrapper[4888]: I1006 15:01:17.028329 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:17 crc kubenswrapper[4888]: I1006 15:01:17.028592 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:17 crc kubenswrapper[4888]: I1006 15:01:17.028639 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:17 crc kubenswrapper[4888]: I1006 15:01:17.028651 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:18 crc kubenswrapper[4888]: I1006 15:01:18.032346 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:18 crc kubenswrapper[4888]: I1006 15:01:18.033829 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:18 crc kubenswrapper[4888]: I1006 15:01:18.033947 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:18 crc kubenswrapper[4888]: I1006 15:01:18.034039 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:19 crc kubenswrapper[4888]: I1006 15:01:19.425147 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:19 crc kubenswrapper[4888]: I1006 15:01:19.425346 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:19 crc kubenswrapper[4888]: I1006 15:01:19.426311 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:19 crc kubenswrapper[4888]: I1006 15:01:19.426344 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:19 crc kubenswrapper[4888]: I1006 15:01:19.426353 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:19 crc kubenswrapper[4888]: I1006 15:01:19.429159 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.036196 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.037107 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.037145 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.037153 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.292946 4888 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.668552 4888 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.668625 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.679165 4888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.680339 4888 trace.go:236] Trace[96036104]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 15:01:08.898) (total time: 11781ms): Oct 06 15:01:20 crc kubenswrapper[4888]: Trace[96036104]: ---"Objects listed" error: 11781ms (15:01:20.680) Oct 06 15:01:20 crc kubenswrapper[4888]: Trace[96036104]: [11.781348192s] [11.781348192s] END Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.680362 4888 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.680936 4888 trace.go:236] Trace[38038441]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 15:01:08.574) (total time: 12106ms): Oct 06 15:01:20 crc kubenswrapper[4888]: Trace[38038441]: ---"Objects listed" error: 12106ms (15:01:20.680) Oct 06 15:01:20 crc kubenswrapper[4888]: Trace[38038441]: [12.106347494s] [12.106347494s] END Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.681064 4888 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.681628 4888 trace.go:236] Trace[1555395827]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 15:01:09.310) (total time: 11370ms): Oct 06 15:01:20 crc kubenswrapper[4888]: Trace[1555395827]: ---"Objects listed" 
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.680339 4888 trace.go:236] Trace[96036104]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 15:01:08.898) (total time: 11781ms):
Oct 06 15:01:20 crc kubenswrapper[4888]: Trace[96036104]: ---"Objects listed" error: 11781ms (15:01:20.680)
Oct 06 15:01:20 crc kubenswrapper[4888]: Trace[96036104]: [11.781348192s] [11.781348192s] END
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.680362 4888 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.680936 4888 trace.go:236] Trace[38038441]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 15:01:08.574) (total time: 12106ms):
Oct 06 15:01:20 crc kubenswrapper[4888]: Trace[38038441]: ---"Objects listed" error: 12106ms (15:01:20.680)
Oct 06 15:01:20 crc kubenswrapper[4888]: Trace[38038441]: [12.106347494s] [12.106347494s] END
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.681064 4888 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.681628 4888 trace.go:236] Trace[1555395827]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 15:01:09.310) (total time: 11370ms):
Oct 06 15:01:20 crc kubenswrapper[4888]: Trace[1555395827]: ---"Objects listed" error: 11370ms (15:01:20.681)
Oct 06 15:01:20 crc kubenswrapper[4888]: Trace[1555395827]: [11.370734898s] [11.370734898s] END
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.681661 4888 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.681731 4888 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.681961 4888 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.842752 4888 apiserver.go:52] "Watching apiserver"
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.844516 4888 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.844740 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.845121 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.845202 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.845295 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.845390 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.845471 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.846352 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.846412 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.846469 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
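Every pod in the SyncLoop ADD batch fails the same way: the runtime reports NetworkReady=false because nothing has written a CNI network configuration yet. The check amounts to looking for a config file in the directory named in the error. A sketch under that assumption; the accepted extensions mirror common CNI conventions and are not taken from this log:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // hasCNIConfig reports whether any CNI network configuration exists
    // in dir; runtimes keep NetworkReady=false until the network plugin
    // drops a config file there.
    func hasCNIConfig(dir string) (bool, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return false, err
    	}
    	for _, e := range entries {
    		if e.IsDir() {
    			continue
    		}
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json": // commonly accepted extensions
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
    	if err != nil {
    		fmt.Println("cannot read CNI config dir:", err)
    		return
    	}
    	if !ok {
    		fmt.Println("network is not ready: no CNI configuration file found")
    	} else {
    		fmt.Println("CNI configuration present")
    	}
    }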
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.846511 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.847518 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.848421 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.849482 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.849546 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.850664 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.850739 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.850738 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.850904 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.855964 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.856534 4888 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.873081 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.873081 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883210 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883266 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883294 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883325 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883346 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883367 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883390 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883409 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883429 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883449 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883469 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883491 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883512 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883530 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883548 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883564 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883584 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883607 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883627 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883633 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883651 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883671 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883695 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883718 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883741 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc 
kubenswrapper[4888]: I1006 15:01:20.883763 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883785 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883829 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883851 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883871 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883888 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883907 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883925 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883943 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883962 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 15:01:20 
crc kubenswrapper[4888]: I1006 15:01:20.883980 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884006 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884034 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884058 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884109 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884132 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884150 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884169 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884188 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884226 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 
06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884278 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884297 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884315 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884334 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884353 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884371 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884390 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884411 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884435 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884462 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884484 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884529 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884574 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884601 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884622 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884647 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884670 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884712 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884741 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884767 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884836 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884869 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884894 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884919 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884956 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885019 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885046 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885072 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885095 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885115 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885136 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885161 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885186 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885209 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885233 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885255 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885278 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885301 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885412 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885447 4888 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885471 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885495 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885517 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885539 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885561 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885596 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885621 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885644 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885666 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885690 
4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885713 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885740 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885764 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885789 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885831 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885856 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885877 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885897 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885918 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885938 4888 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885960 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885982 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886005 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886027 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886048 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886068 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886090 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886114 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886137 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886167 4888 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886190 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886214 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886236 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886260 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886288 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886312 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886338 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886365 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886389 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 
15:01:20.886416 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886438 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886461 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886486 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886509 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886534 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886558 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886583 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886612 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886637 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 15:01:20 crc 
kubenswrapper[4888]: I1006 15:01:20.886662 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886685 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886707 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886731 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886764 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886789 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886832 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886859 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886882 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886905 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 15:01:20 crc 
kubenswrapper[4888]: I1006 15:01:20.886927 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886951 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886974 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886998 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887021 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887042 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887067 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887091 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887115 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887138 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887199 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887227 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887252 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887277 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887301 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887328 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887547 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887569 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887593 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887617 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887643 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887666 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887691 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887716 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887742 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887767 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887794 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887837 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887864 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887889 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887914 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887938 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887964 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887991 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888016 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888044 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888070 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888093 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888117 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888142 4888 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888167 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888191 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888218 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.883991 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888243 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884019 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888269 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888294 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888320 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888344 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888370 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888398 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888423 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.904095 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-rwfbx"] Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.904442 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-rwfbx" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.905187 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884368 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884466 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884714 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884763 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.884918 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885122 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885238 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885243 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885254 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885402 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885454 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885592 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885651 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885756 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885777 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885925 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.885940 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886068 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886097 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886134 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886185 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886289 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.908166 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.908178 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886326 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886321 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886455 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886556 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886593 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886602 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886728 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.886756 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887190 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887221 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887741 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.887819 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888018 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888114 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888157 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888195 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888224 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888335 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888361 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888415 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.888486 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:01:21.388430961 +0000 UTC m=+21.200781729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888685 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888737 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888867 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.888973 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.889072 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.889080 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.889180 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.889258 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.889284 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.889552 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.889746 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.889917 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.890065 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.890229 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.890728 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.890874 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.891128 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.891166 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.891472 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.891541 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.891604 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.891743 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.891777 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.892101 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.892627 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.892977 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.893034 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.893097 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.893210 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.894983 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.895114 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.895467 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.895499 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.895678 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.895806 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.895789 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.896028 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.896234 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.896763 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.896945 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.897043 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.897054 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.897122 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.897342 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.897363 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.897398 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.897600 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.897738 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.898081 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.898198 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.898234 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.898472 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.898676 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.898726 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.898938 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.899029 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.899097 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.899178 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.899212 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.899682 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.899686 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.899922 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.900026 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.900103 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.900476 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.900915 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.901074 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.901572 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.901738 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.901938 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.902116 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.902375 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.902855 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.904235 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.904777 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.904935 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.904976 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.905566 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.905818 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.905897 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.905899 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.905911 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.905970 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.905979 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.906082 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.906201 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.906397 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.906527 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.906546 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.906596 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.906726 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.906989 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.906989 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.907596 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.907638 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.907853 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.907946 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.907965 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.908259 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.908506 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.908657 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.909214 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.909277 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.909291 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.909299 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.909429 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.909475 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.909774 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.909782 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.910010 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.910208 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.910298 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.910409 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.910462 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.910539 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.911872 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.912202 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.912233 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.912568 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.913261 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.913738 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.913759 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.914332 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.915421 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.915482 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.915819 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.915957 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.916069 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.916127 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.916210 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.916242 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.916372 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.916459 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.916575 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.916658 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.916742 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.916837 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.916981 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.917093 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.917188 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.917281 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.917444 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.917649 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.917891 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.917990 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918060 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918145 4888 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918232 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918296 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918365 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918447 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918565 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918644 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918734 4888 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918931 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.919029 4888 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.919108 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.919185 4888 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.919272 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.919360 4888 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.919501 4888 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.919664 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.919759 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.919902 4888 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.920012 4888 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.920102 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.920188 4888 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.920265 4888 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.920347 4888 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.920427 4888 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.920510 4888 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.920681 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.920760 4888 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921103 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921196 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921293 4888 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921379 4888 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921460 4888 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921534 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921658 4888 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.919694 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.917202 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.917579 4888 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.921897 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:21.421875037 +0000 UTC m=+21.234225755 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921900 4888 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921924 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921939 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921953 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921965 4888 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921977 4888 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.921988 4888 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922008 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922019 4888 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922029 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922040 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922050 4888 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node 
\"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922064 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922074 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922086 4888 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922096 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922108 4888 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922119 4888 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922129 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922139 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922149 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922159 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922168 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922178 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922189 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc 
kubenswrapper[4888]: I1006 15:01:20.922199 4888 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922209 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922219 4888 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922229 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922238 4888 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922248 4888 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922259 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922270 4888 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922280 4888 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922290 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922299 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922309 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922318 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922328 4888 reconciler_common.go:293] 
"Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922337 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922347 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922356 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922366 4888 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922376 4888 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922387 4888 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922398 4888 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922407 4888 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922416 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922429 4888 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922438 4888 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922448 4888 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922457 4888 
reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922467 4888 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922477 4888 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922486 4888 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922497 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922507 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922516 4888 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922526 4888 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922535 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922547 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922558 4888 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922570 4888 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922580 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922589 4888 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922599 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922610 4888 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922622 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922631 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922641 4888 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922651 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922661 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922671 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922681 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922692 4888 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922702 4888 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922712 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922722 4888 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922733 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922743 4888 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922753 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922763 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922774 4888 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922785 4888 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922812 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922825 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922835 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922844 4888 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922856 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922866 4888 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc 
kubenswrapper[4888]: I1006 15:01:20.922876 4888 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922886 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922898 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922908 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922920 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922930 4888 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922941 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922953 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922963 4888 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922974 4888 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922984 4888 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.922995 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923005 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 06 
15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923018 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923029 4888 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923039 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923050 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923061 4888 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923071 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923081 4888 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923091 4888 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923101 4888 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923113 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923126 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923136 4888 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923148 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" 
DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923159 4888 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923169 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923179 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923190 4888 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923200 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923211 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923225 4888 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923235 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923245 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923258 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923269 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923279 4888 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923289 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" 
Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.923299 4888 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918912 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.919149 4888 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.925626 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:21.425605488 +0000 UTC m=+21.237956196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.919511 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.918976 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.925913 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.926564 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.926822 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.926845 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.926857 4888 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.926868 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.920876 4888 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.926903 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.927130 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.932057 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.932670 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.933398 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.950181 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.952664 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.953492 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.953784 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.954820 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.955347 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.955991 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.956872 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.958012 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.958645 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.958707 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.959369 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.960447 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.961171 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.962228 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.963054 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.963403 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.964332 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.966133 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.967002 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.968409 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.969169 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.969313 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" 
(UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.973835 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.975103 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.975172 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.975531 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.975642 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.976204 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.976975 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.977945 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.978531 4888 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.979478 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.979601 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.980651 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.981295 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.981334 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.981505 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.981527 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.981963 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.982629 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.983262 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.986596 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.989268 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: I1006 15:01:20.989790 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.990709 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.990740 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.990755 4888 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:20 crc kubenswrapper[4888]: E1006 15:01:20.990829 4888 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:21.490806323 +0000 UTC m=+21.303157031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:20.994034 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:20.995132 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:20.995966 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.001899 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.004064 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.005071 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.005401 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.006462 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.016676 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.018562 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.018592 4888 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.018659 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:21.518637364 +0000 UTC m=+21.330988082 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.017066 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.017098 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.019550 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.020585 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.021169 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.022749 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.023271 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.024111 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.024971 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.025874 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.026298 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.026752 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" 
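NOTE: The two failure patterns repeating through this boot window are linked. First, the kube-api-access-* projected volumes cannot be mounted until the kubelet's informer caches have registered each namespace's "kube-root-ca.crt" and "openshift-service-ca.crt" ConfigMaps, so each MountVolume.SetUp attempt is requeued with exponential backoff (500ms here, doubling on repeated failures, as the durationBeforeRetry field shows). Second, every pod status patch is rejected because the "pod.network-node-identity.openshift.io" webhook is served by the network-node-identity pod that this same single-node cluster is still recreating, hence the repeating "dial tcp 127.0.0.1:9743: connect: connection refused" until that pod's sandbox and containers come back up.

The sketch below is a minimal standalone illustration, not kubelet source: it polls the webhook endpoint named in these errors with the same doubling-delay retry pattern, using the wait.ExponentialBackoff helper from k8s.io/apimachinery. The endpoint constant and the step count are assumptions chosen to mirror the log above.

// webhookprobe.go: standalone sketch (not kubelet code). Retries a TCP
// dial against the node-local webhook endpoint with a 500ms doubling
// backoff, mirroring the durationBeforeRetry behaviour seen above.
package main

import (
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// 127.0.0.1:9743 is the address from the "failed calling webhook" errors.
	const endpoint = "127.0.0.1:9743"

	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // matches durationBeforeRetry 500ms
		Factor:   2.0,                    // double the delay after each failure
		Steps:    5,                      // assumption: give up after five tries
	}

	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		conn, dialErr := net.DialTimeout("tcp", endpoint, time.Second)
		if dialErr != nil {
			fmt.Printf("dial %s: %v (will retry)\n", endpoint, dialErr)
			return false, nil // not ready yet; sleep and retry
		}
		conn.Close()
		return true, nil // webhook is accepting connections
	})
	if err != nil {
		fmt.Printf("webhook still unreachable after retries: %v\n", err)
		return
	}
	fmt.Println("webhook endpoint is reachable")
}

Once the webhook binds 127.0.0.1:9743 again, the status patches that fail above are retried and applied; the backoff only bounds how quickly each queued operation notices.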
Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.027681 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.027777 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.027825 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph2kk\" (UniqueName: \"kubernetes.io/projected/4d552ea8-3df5-49d4-9cf2-25e2147ff628-kube-api-access-ph2kk\") pod \"node-resolver-rwfbx\" (UID: \"4d552ea8-3df5-49d4-9cf2-25e2147ff628\") " pod="openshift-dns/node-resolver-rwfbx" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.027965 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028001 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4d552ea8-3df5-49d4-9cf2-25e2147ff628-hosts-file\") pod \"node-resolver-rwfbx\" (UID: \"4d552ea8-3df5-49d4-9cf2-25e2147ff628\") " pod="openshift-dns/node-resolver-rwfbx" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028156 4888 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028235 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028333 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028728 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028765 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028788 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028821 4888 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028831 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028840 4888 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028849 4888 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028858 4888 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028869 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028897 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028908 4888 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028918 4888 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028927 4888 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028946 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028966 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 
06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028975 4888 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028984 4888 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.028995 4888 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.029004 4888 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.029013 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.029022 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.029031 4888 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.029040 4888 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.029152 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.042505 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.079570 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.108203 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.125939 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.129647 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph2kk\" (UniqueName: \"kubernetes.io/projected/4d552ea8-3df5-49d4-9cf2-25e2147ff628-kube-api-access-ph2kk\") pod \"node-resolver-rwfbx\" (UID: \"4d552ea8-3df5-49d4-9cf2-25e2147ff628\") " pod="openshift-dns/node-resolver-rwfbx" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.129689 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4d552ea8-3df5-49d4-9cf2-25e2147ff628-hosts-file\") pod \"node-resolver-rwfbx\" (UID: \"4d552ea8-3df5-49d4-9cf2-25e2147ff628\") " pod="openshift-dns/node-resolver-rwfbx" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.129837 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4d552ea8-3df5-49d4-9cf2-25e2147ff628-hosts-file\") pod \"node-resolver-rwfbx\" (UID: \"4d552ea8-3df5-49d4-9cf2-25e2147ff628\") " pod="openshift-dns/node-resolver-rwfbx" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.141470 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.151061 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph2kk\" (UniqueName: \"kubernetes.io/projected/4d552ea8-3df5-49d4-9cf2-25e2147ff628-kube-api-access-ph2kk\") pod \"node-resolver-rwfbx\" (UID: \"4d552ea8-3df5-49d4-9cf2-25e2147ff628\") " pod="openshift-dns/node-resolver-rwfbx" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.151657 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.158973 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.161007 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.166926 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.172921 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.185332 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.221549 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.232347 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rwfbx" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.244283 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.257849 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 
06 15:01:21 crc kubenswrapper[4888]: W1006 15:01:21.258938 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d552ea8_3df5_49d4_9cf2_25e2147ff628.slice/crio-bc172dce5931903f2a21636bc48097adac53ab90a03486486a5e0f357ecf0655 WatchSource:0}: Error finding container bc172dce5931903f2a21636bc48097adac53ab90a03486486a5e0f357ecf0655: Status 404 returned error can't find the container with id bc172dce5931903f2a21636bc48097adac53ab90a03486486a5e0f357ecf0655 Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.270983 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.283843 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.340672 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-spjkk"] Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.341094 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.345071 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.345198 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.345252 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.345345 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.345546 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.356878 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.365914 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.378125 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.401134 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.411609 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.424900 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.432458 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.432533 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a145d9af-9431-4196-bd66-a095e39bf3ca-mcd-auth-proxy-config\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.432558 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a145d9af-9431-4196-bd66-a095e39bf3ca-rootfs\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.432580 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.432815 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:01:22.432766525 +0000 UTC m=+22.245117293 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.432844 4888 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.432899 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:22.432887658 +0000 UTC m=+22.245238376 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.432608 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.432957 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9bnb\" (UniqueName: \"kubernetes.io/projected/a145d9af-9431-4196-bd66-a095e39bf3ca-kube-api-access-z9bnb\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.432915 4888 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.432980 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.433003 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a145d9af-9431-4196-bd66-a095e39bf3ca-proxy-tls\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.433033 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:22.433018072 +0000 UTC m=+22.245368790 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.441206 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.534279 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9bnb\" (UniqueName: \"kubernetes.io/projected/a145d9af-9431-4196-bd66-a095e39bf3ca-kube-api-access-z9bnb\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.534318 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a145d9af-9431-4196-bd66-a095e39bf3ca-proxy-tls\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.534355 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.534379 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a145d9af-9431-4196-bd66-a095e39bf3ca-mcd-auth-proxy-config\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.534420 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.534437 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a145d9af-9431-4196-bd66-a095e39bf3ca-rootfs\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.534480 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a145d9af-9431-4196-bd66-a095e39bf3ca-rootfs\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.534651 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.534689 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:21 
crc kubenswrapper[4888]: E1006 15:01:21.534702 4888 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.534768 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:22.534746286 +0000 UTC m=+22.347097004 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.535193 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.535207 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.535217 4888 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:21 crc kubenswrapper[4888]: E1006 15:01:21.535248 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:22.535238719 +0000 UTC m=+22.347589527 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.535399 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a145d9af-9431-4196-bd66-a095e39bf3ca-mcd-auth-proxy-config\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.538422 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a145d9af-9431-4196-bd66-a095e39bf3ca-proxy-tls\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.548727 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9bnb\" (UniqueName: \"kubernetes.io/projected/a145d9af-9431-4196-bd66-a095e39bf3ca-kube-api-access-z9bnb\") pod \"machine-config-daemon-spjkk\" (UID: \"a145d9af-9431-4196-bd66-a095e39bf3ca\") " pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.658911 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:01:21 crc kubenswrapper[4888]: W1006 15:01:21.668684 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda145d9af_9431_4196_bd66_a095e39bf3ca.slice/crio-5d385b7bd4935a130905df1d8ba48265010a14fc70e240013d3fc7449251238e WatchSource:0}: Error finding container 5d385b7bd4935a130905df1d8ba48265010a14fc70e240013d3fc7449251238e: Status 404 returned error can't find the container with id 5d385b7bd4935a130905df1d8ba48265010a14fc70e240013d3fc7449251238e Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.689964 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dk65d"] Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.690646 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.693135 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.693417 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.693545 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.693703 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.694681 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.696899 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hw8s9"] Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.697405 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.701351 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.701516 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.716785 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.724423 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.735978 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 
06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.744967 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.757094 4888 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.768968 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.783187 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.792663 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.834207 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836471 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-cnibin\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836502 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-os-release\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836517 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-var-lib-kubelet\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836533 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-conf-dir\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 
15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836554 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-cnibin\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836574 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-system-cni-dir\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836601 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-os-release\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836615 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-var-lib-cni-bin\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836630 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-daemon-config\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836644 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-etc-kubernetes\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836658 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-socket-dir-parent\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836673 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22b737e9-61a2-4561-9dfe-6edb6ca1f976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836692 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-run-netns\") pod 
\"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836706 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtlv5\" (UniqueName: \"kubernetes.io/projected/a8a92e6a-76c9-4370-b509-56d6e41f99de-kube-api-access-qtlv5\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836719 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-hostroot\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836735 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfv5w\" (UniqueName: \"kubernetes.io/projected/22b737e9-61a2-4561-9dfe-6edb6ca1f976-kube-api-access-xfv5w\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836748 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-cni-dir\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836768 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-system-cni-dir\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836809 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836830 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22b737e9-61a2-4561-9dfe-6edb6ca1f976-cni-binary-copy\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836845 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-var-lib-cni-multus\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836858 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-run-k8s-cni-cncf-io\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836871 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a8a92e6a-76c9-4370-b509-56d6e41f99de-cni-binary-copy\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.836885 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-run-multus-certs\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.863414 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.882134 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.897974 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.914973 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.924755 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.932874 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 
06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938122 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-cnibin\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938155 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-os-release\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938169 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-var-lib-kubelet\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938190 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-cnibin\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938203 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-conf-dir\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938221 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-system-cni-dir\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938246 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-os-release\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938260 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-var-lib-cni-bin\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938275 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-socket-dir-parent\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938290 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-daemon-config\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938303 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-etc-kubernetes\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938317 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22b737e9-61a2-4561-9dfe-6edb6ca1f976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938330 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtlv5\" (UniqueName: \"kubernetes.io/projected/a8a92e6a-76c9-4370-b509-56d6e41f99de-kube-api-access-qtlv5\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938349 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-run-netns\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938365 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-hostroot\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938389 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfv5w\" (UniqueName: \"kubernetes.io/projected/22b737e9-61a2-4561-9dfe-6edb6ca1f976-kube-api-access-xfv5w\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938403 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-cni-dir\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938417 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-system-cni-dir\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938433 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22b737e9-61a2-4561-9dfe-6edb6ca1f976-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938447 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938460 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-run-k8s-cni-cncf-io\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938473 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-var-lib-cni-multus\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938488 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a8a92e6a-76c9-4370-b509-56d6e41f99de-cni-binary-copy\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938507 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-run-multus-certs\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938559 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-run-multus-certs\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938607 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-cnibin\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938705 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-os-release\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938725 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-var-lib-kubelet\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 
15:01:21.938743 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-cnibin\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938762 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-conf-dir\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938780 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-system-cni-dir\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938834 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-os-release\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938862 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-var-lib-cni-bin\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938895 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-cni-dir\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.938922 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-system-cni-dir\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.939104 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-socket-dir-parent\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.939463 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22b737e9-61a2-4561-9dfe-6edb6ca1f976-cni-binary-copy\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.939633 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/a8a92e6a-76c9-4370-b509-56d6e41f99de-multus-daemon-config\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.939683 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-etc-kubernetes\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.939880 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22b737e9-61a2-4561-9dfe-6edb6ca1f976-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.939918 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-run-k8s-cni-cncf-io\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.939939 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-var-lib-cni-multus\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.940129 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22b737e9-61a2-4561-9dfe-6edb6ca1f976-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.940275 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-host-run-netns\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.940303 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a8a92e6a-76c9-4370-b509-56d6e41f99de-hostroot\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.940344 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a8a92e6a-76c9-4370-b509-56d6e41f99de-cni-binary-copy\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.969370 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.973640 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtlv5\" (UniqueName: \"kubernetes.io/projected/a8a92e6a-76c9-4370-b509-56d6e41f99de-kube-api-access-qtlv5\") pod \"multus-hw8s9\" (UID: \"a8a92e6a-76c9-4370-b509-56d6e41f99de\") " pod="openshift-multus/multus-hw8s9" Oct 06 15:01:21 crc kubenswrapper[4888]: I1006 15:01:21.978428 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfv5w\" (UniqueName: \"kubernetes.io/projected/22b737e9-61a2-4561-9dfe-6edb6ca1f976-kube-api-access-xfv5w\") pod \"multus-additional-cni-plugins-dk65d\" (UID: \"22b737e9-61a2-4561-9dfe-6edb6ca1f976\") " pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.011788 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dk65d" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.017255 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-pl
ugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a7
14c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.029441 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hw8s9" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.045924 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.045963 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.045973 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"5d385b7bd4935a130905df1d8ba48265010a14fc70e240013d3fc7449251238e"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.047765 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rwfbx" event={"ID":"4d552ea8-3df5-49d4-9cf2-25e2147ff628","Type":"ContainerStarted","Data":"11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.047855 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rwfbx" event={"ID":"4d552ea8-3df5-49d4-9cf2-25e2147ff628","Type":"ContainerStarted","Data":"bc172dce5931903f2a21636bc48097adac53ab90a03486486a5e0f357ecf0655"} Oct 06 15:01:22 crc kubenswrapper[4888]: W1006 15:01:22.053862 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8a92e6a_76c9_4370_b509_56d6e41f99de.slice/crio-f7f0ad55a44bebcc39ab6159a0a31853a072814ebb39684a7c9e80af5c823e8f WatchSource:0}: Error finding container f7f0ad55a44bebcc39ab6159a0a31853a072814ebb39684a7c9e80af5c823e8f: Status 404 returned error can't find the container with id f7f0ad55a44bebcc39ab6159a0a31853a072814ebb39684a7c9e80af5c823e8f Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.056275 4888 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"08c881ceeb1510cf8aa4e7dd9c74d19a4b740a71219390b844ebfae0080ee333"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.056648 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.057931 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.057971 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"22804255e32ee2551bea67a893cde855ed2b22cfd31baa5261aa2fc55a116ffd"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.060368 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.060787 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.062582 4888 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a" exitCode=255 Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.062650 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.062855 4888 scope.go:117] "RemoveContainer" containerID="a77152d1ea878f15c50975bfba7588111d950be35ca933a5b48b18eac44a9923" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.063782 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" event={"ID":"22b737e9-61a2-4561-9dfe-6edb6ca1f976","Type":"ContainerStarted","Data":"53a51a3d03eaa748a782781049155136f78473c76e6ba282e8f7f3659e97c5c4"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.071641 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.071709 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.071720 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fb6adcf41e1d2b5bc4a52bf99a7f05528992e13c4bbd862a75dc7e63c5256f7a"} Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.077542 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hzx2q"] Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.083230 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.086887 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.087055 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.087186 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.087312 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.087500 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.087594 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.087689 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.089189 4888 scope.go:117] "RemoveContainer" containerID="5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.089449 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.089490 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.091057 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.108343 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.123260 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77152d1ea878f15c50975bfba7588111d950be35ca933a5b48b18eac44a9923\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:14Z\\\",\\\"message\\\":\\\"W1006 15:01:03.963221 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1006 
15:01:03.963837 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759762863 cert, and key in /tmp/serving-cert-1912478468/serving-signer.crt, /tmp/serving-cert-1912478468/serving-signer.key\\\\nI1006 15:01:04.357690 1 observer_polling.go:159] Starting file observer\\\\nW1006 15:01:04.359972 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1006 15:01:04.360156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 15:01:04.361601 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1912478468/tls.crt::/tmp/serving-cert-1912478468/tls.key\\\\\\\"\\\\nF1006 15:01:14.744571 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.136296 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.148269 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.159524 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.170647 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.181839 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.193750 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"et
c-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.214540 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.227133 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.239211 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241462 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-systemd\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241499 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-log-socket\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241527 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-netns\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241541 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-node-log\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241563 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-systemd-units\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241580 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-bin\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241596 4888 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-netd\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241620 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-var-lib-openvswitch\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241645 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-etc-openvswitch\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241666 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-openvswitch\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241684 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-env-overrides\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241943 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-slash\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.241987 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-ovn\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.242003 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.242037 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phx28\" (UniqueName: \"kubernetes.io/projected/61cf5a40-f739-4ffe-8544-34bcd92aadc1-kube-api-access-phx28\") pod \"ovnkube-node-hzx2q\" (UID: 
\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.242068 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovn-node-metrics-cert\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.242137 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-script-lib\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.242166 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-kubelet\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.242182 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-ovn-kubernetes\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.242214 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-config\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.257140 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:22Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.342980 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-systemd-units\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343022 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-bin\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343043 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-netd\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343063 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-var-lib-openvswitch\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343092 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-etc-openvswitch\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343109 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-openvswitch\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343129 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-env-overrides\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343150 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-slash\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343168 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-ovn\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343169 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-systemd-units\") pod \"ovnkube-node-hzx2q\" (UID: 
\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343187 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343194 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-netd\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343229 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-openvswitch\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343261 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-etc-openvswitch\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343216 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovn-node-metrics-cert\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343304 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phx28\" (UniqueName: \"kubernetes.io/projected/61cf5a40-f739-4ffe-8544-34bcd92aadc1-kube-api-access-phx28\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343326 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-script-lib\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343350 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-kubelet\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343388 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-ovn-kubernetes\") pod \"ovnkube-node-hzx2q\" (UID: 
\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343403 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-config\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343418 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-systemd\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343431 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-log-socket\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343473 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-node-log\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343496 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-netns\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343537 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-var-lib-openvswitch\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343550 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-netns\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343609 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-slash\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343736 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-log-socket\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343922 4888 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-systemd\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343945 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-node-log\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.344012 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-kubelet\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.344051 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-ovn\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.344118 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.344122 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-env-overrides\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.343105 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-bin\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.344135 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-ovn-kubernetes\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.344391 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-config\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.344510 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-script-lib\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.346486 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovn-node-metrics-cert\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.364070 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phx28\" (UniqueName: \"kubernetes.io/projected/61cf5a40-f739-4ffe-8544-34bcd92aadc1-kube-api-access-phx28\") pod \"ovnkube-node-hzx2q\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.398393 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:22 crc kubenswrapper[4888]: W1006 15:01:22.409955 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61cf5a40_f739_4ffe_8544_34bcd92aadc1.slice/crio-f54c6fa0aa9c55e2c090b2af2d7612a1d51198c16004b0010c94e5031bd7a89c WatchSource:0}: Error finding container f54c6fa0aa9c55e2c090b2af2d7612a1d51198c16004b0010c94e5031bd7a89c: Status 404 returned error can't find the container with id f54c6fa0aa9c55e2c090b2af2d7612a1d51198c16004b0010c94e5031bd7a89c Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.444045 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.444186 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.444250 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:01:24.444203239 +0000 UTC m=+24.256553967 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.444307 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.444308 4888 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.444454 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:24.444443856 +0000 UTC m=+24.256794574 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.444344 4888 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.444548 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:24.444525758 +0000 UTC m=+24.256876516 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.545259 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.545315 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.545448 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.545466 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.545468 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.545513 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.545529 4888 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.545598 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:24.545571463 +0000 UTC m=+24.357922231 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.545478 4888 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.545914 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:24.545892072 +0000 UTC m=+24.358242870 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.921280 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.921322 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.921400 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.921529 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.921573 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:22 crc kubenswrapper[4888]: E1006 15:01:22.921624 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.925357 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.926182 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.930456 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.931462 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 06 15:01:22 crc kubenswrapper[4888]: I1006 15:01:22.932727 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.075604 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1" exitCode=0 Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.075693 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1"} Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.075751 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"f54c6fa0aa9c55e2c090b2af2d7612a1d51198c16004b0010c94e5031bd7a89c"} Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.078624 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.081011 4888 scope.go:117] "RemoveContainer" containerID="5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a" Oct 06 15:01:23 crc kubenswrapper[4888]: E1006 15:01:23.081191 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.081752 4888 generic.go:334] "Generic (PLEG): container finished" podID="22b737e9-61a2-4561-9dfe-6edb6ca1f976" containerID="6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a" exitCode=0 Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.081840 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-dk65d" event={"ID":"22b737e9-61a2-4561-9dfe-6edb6ca1f976","Type":"ContainerDied","Data":"6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a"} Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.083686 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hw8s9" event={"ID":"a8a92e6a-76c9-4370-b509-56d6e41f99de","Type":"ContainerStarted","Data":"fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8"} Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.083711 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hw8s9" event={"ID":"a8a92e6a-76c9-4370-b509-56d6e41f99de","Type":"ContainerStarted","Data":"f7f0ad55a44bebcc39ab6159a0a31853a072814ebb39684a7c9e80af5c823e8f"} Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.091536 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.109468 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a77152d1ea878f15c50975bfba7588111d950be35ca933a5b48b18eac44a9923\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:14Z\\\",\\\"message\\\":\\\"W1006 15:01:03.963221 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1006 
15:01:03.963837 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759762863 cert, and key in /tmp/serving-cert-1912478468/serving-signer.crt, /tmp/serving-cert-1912478468/serving-signer.key\\\\nI1006 15:01:04.357690 1 observer_polling.go:159] Starting file observer\\\\nW1006 15:01:04.359972 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1006 15:01:04.360156 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1006 15:01:04.361601 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1912478468/tls.crt::/tmp/serving-cert-1912478468/tls.key\\\\\\\"\\\\nF1006 15:01:14.744571 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.135634 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.155321 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.167437 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.190321 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.214304 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.235362 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"et
c-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.258773 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z 
is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.270470 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.281981 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.294330 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.335073 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.358739 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.378961 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.417271 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.442925 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.457672 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.478150 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.491615 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.504876 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc 
kubenswrapper[4888]: I1006 15:01:23.522769 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.540374 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.554895 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.584954 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-h2xmp"] Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.585347 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-h2xmp" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.588890 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.589087 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.591552 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.591828 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.609459 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.632722 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc 
kubenswrapper[4888]: I1006 15:01:23.643421 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.657320 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.678859 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.696074 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.711884 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.726909 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.748531 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z 
is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.758378 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0b81ef7f-121c-47c3-a360-af9e56447038-serviceca\") pod \"node-ca-h2xmp\" (UID: \"0b81ef7f-121c-47c3-a360-af9e56447038\") " pod="openshift-image-registry/node-ca-h2xmp" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.758426 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b81ef7f-121c-47c3-a360-af9e56447038-host\") pod \"node-ca-h2xmp\" (UID: \"0b81ef7f-121c-47c3-a360-af9e56447038\") " pod="openshift-image-registry/node-ca-h2xmp" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.758474 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft78d\" (UniqueName: \"kubernetes.io/projected/0b81ef7f-121c-47c3-a360-af9e56447038-kube-api-access-ft78d\") pod \"node-ca-h2xmp\" (UID: \"0b81ef7f-121c-47c3-a360-af9e56447038\") " pod="openshift-image-registry/node-ca-h2xmp" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.763125 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc 
kubenswrapper[4888]: I1006 15:01:23.777422 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.789205 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.799740 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:23Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.859498 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft78d\" (UniqueName: \"kubernetes.io/projected/0b81ef7f-121c-47c3-a360-af9e56447038-kube-api-access-ft78d\") pod \"node-ca-h2xmp\" (UID: \"0b81ef7f-121c-47c3-a360-af9e56447038\") " pod="openshift-image-registry/node-ca-h2xmp" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.859557 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0b81ef7f-121c-47c3-a360-af9e56447038-serviceca\") pod \"node-ca-h2xmp\" (UID: \"0b81ef7f-121c-47c3-a360-af9e56447038\") " pod="openshift-image-registry/node-ca-h2xmp" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.859575 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b81ef7f-121c-47c3-a360-af9e56447038-host\") pod \"node-ca-h2xmp\" (UID: \"0b81ef7f-121c-47c3-a360-af9e56447038\") " 
pod="openshift-image-registry/node-ca-h2xmp" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.859637 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b81ef7f-121c-47c3-a360-af9e56447038-host\") pod \"node-ca-h2xmp\" (UID: \"0b81ef7f-121c-47c3-a360-af9e56447038\") " pod="openshift-image-registry/node-ca-h2xmp" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.860650 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0b81ef7f-121c-47c3-a360-af9e56447038-serviceca\") pod \"node-ca-h2xmp\" (UID: \"0b81ef7f-121c-47c3-a360-af9e56447038\") " pod="openshift-image-registry/node-ca-h2xmp" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.878251 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft78d\" (UniqueName: \"kubernetes.io/projected/0b81ef7f-121c-47c3-a360-af9e56447038-kube-api-access-ft78d\") pod \"node-ca-h2xmp\" (UID: \"0b81ef7f-121c-47c3-a360-af9e56447038\") " pod="openshift-image-registry/node-ca-h2xmp" Oct 06 15:01:23 crc kubenswrapper[4888]: I1006 15:01:23.903859 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-h2xmp" Oct 06 15:01:23 crc kubenswrapper[4888]: W1006 15:01:23.964108 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b81ef7f_121c_47c3_a360_af9e56447038.slice/crio-dcc5e04968ed5ab7cbb8ad6ad6a5168a9287e3d619ecba92a02a06d268e36fe7 WatchSource:0}: Error finding container dcc5e04968ed5ab7cbb8ad6ad6a5168a9287e3d619ecba92a02a06d268e36fe7: Status 404 returned error can't find the container with id dcc5e04968ed5ab7cbb8ad6ad6a5168a9287e3d619ecba92a02a06d268e36fe7 Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.087311 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h2xmp" event={"ID":"0b81ef7f-121c-47c3-a360-af9e56447038","Type":"ContainerStarted","Data":"dcc5e04968ed5ab7cbb8ad6ad6a5168a9287e3d619ecba92a02a06d268e36fe7"} Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.089469 4888 generic.go:334] "Generic (PLEG): container finished" podID="22b737e9-61a2-4561-9dfe-6edb6ca1f976" containerID="f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791" exitCode=0 Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.089538 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" event={"ID":"22b737e9-61a2-4561-9dfe-6edb6ca1f976","Type":"ContainerDied","Data":"f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791"} Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.091017 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824"} Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.097626 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3"} Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.097677 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166"} Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.098139 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79"} Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.098173 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb"} Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.098184 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d"} Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.098194 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9"} Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.103434 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.116246 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-
06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.125195 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.135174 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.145587 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.158344 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.171472 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.183444 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.194607 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.202782 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.217565 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.231702 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-ku
bernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.256567 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z 
is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.272385 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.283162 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.296484 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.309109 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.320949 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.334985 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.346170 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.372866 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.413681 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.457073 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z 
is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.464722 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.464939 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:01:28.464907728 +0000 UTC m=+28.277258446 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.465090 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.465183 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.465233 4888 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.465250 4888 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.465292 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:28.465279238 +0000 UTC m=+28.277629956 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.465314 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:28.465305269 +0000 UTC m=+28.277656097 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.491618 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.536918 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-
06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.565730 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.565781 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.565908 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.565926 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.565939 4888 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.565992 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:28.565974603 +0000 UTC m=+28.378325321 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.566069 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.566119 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.566146 4888 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.566237 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:28.566202909 +0000 UTC m=+28.378553667 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.572297 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:24Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.920670 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.920700 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:24 crc kubenswrapper[4888]: I1006 15:01:24.920857 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.921149 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.921266 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:24 crc kubenswrapper[4888]: E1006 15:01:24.921046 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.106257 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-h2xmp" event={"ID":"0b81ef7f-121c-47c3-a360-af9e56447038","Type":"ContainerStarted","Data":"8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33"} Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.111412 4888 generic.go:334] "Generic (PLEG): container finished" podID="22b737e9-61a2-4561-9dfe-6edb6ca1f976" containerID="193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4" exitCode=0 Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.111511 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" event={"ID":"22b737e9-61a2-4561-9dfe-6edb6ca1f976","Type":"ContainerDied","Data":"193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4"} Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.149448 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.169138 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.181923 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.194259 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.203307 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.214593 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.227283 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.244880 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z 
is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.248022 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.248619 4888 scope.go:117] "RemoveContainer" containerID="5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a" Oct 06 15:01:25 crc kubenswrapper[4888]: E1006 15:01:25.248785 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.262085 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.275444 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.288721 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.298615 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.310075 4888 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.320193 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.331932 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.340937 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.355464 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.367692 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.385171 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z 
is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.397452 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.412564 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.454284 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.491861 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 
2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.533292 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.575917 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:25 crc kubenswrapper[4888]: I1006 15:01:25.615850 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:25Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.118980 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad"} Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.121027 4888 generic.go:334] "Generic (PLEG): container finished" 
podID="22b737e9-61a2-4561-9dfe-6edb6ca1f976" containerID="912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525" exitCode=0 Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.121124 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" event={"ID":"22b737e9-61a2-4561-9dfe-6edb6ca1f976","Type":"ContainerDied","Data":"912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525"} Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.134399 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.146785 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.167126 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z 
is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.185118 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.199977 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.214425 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.224199 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.236858 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.251945 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-releas
e\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.262346 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.277883 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.290236 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.301831 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:26Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.921581 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.921581 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:26 crc kubenswrapper[4888]: E1006 15:01:26.921985 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:26 crc kubenswrapper[4888]: E1006 15:01:26.922022 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:26 crc kubenswrapper[4888]: I1006 15:01:26.921587 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:26 crc kubenswrapper[4888]: E1006 15:01:26.922127 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.082842 4888 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.084856 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.084905 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.084916 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.085055 4888 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.094864 4888 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.095712 4888 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.098006 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.098023 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.098031 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.098063 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.098073 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: E1006 15:01:27.111983 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.115401 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.115424 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.115433 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.115445 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.115453 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.126318 4888 generic.go:334] "Generic (PLEG): container finished" podID="22b737e9-61a2-4561-9dfe-6edb6ca1f976" containerID="84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6" exitCode=0 Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.126387 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" event={"ID":"22b737e9-61a2-4561-9dfe-6edb6ca1f976","Type":"ContainerDied","Data":"84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6"} Oct 06 15:01:27 crc kubenswrapper[4888]: E1006 15:01:27.129252 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.137868 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.137900 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.137908 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.137921 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.137952 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.142353 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: E1006 15:01:27.154428 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.154746 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.160353 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.160385 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.160393 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.160409 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.160420 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.166878 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: E1006 15:01:27.171533 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.175177 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.175205 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.175216 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.175233 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.175245 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.188837 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z 
is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: E1006 15:01:27.191041 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: E1006 15:01:27.191145 4888 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.209894 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.209925 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.209935 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.209949 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.209963 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.234091 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.253670 4888 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.268827 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.277684 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.289482 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.301166 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.312568 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.312595 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.312606 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.312619 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.312629 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.313160 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.327847 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.340106 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.414854 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.414882 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.414937 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.414955 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.415077 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.517744 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.517809 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.517824 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.517842 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.517853 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.620840 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.620905 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.620922 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.620945 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.620964 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.672414 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.675783 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.682154 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.683764 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.699440 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.715237 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.722824 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.722857 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.722867 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.722884 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.722893 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.736246 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z 
is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.750321 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.765517 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.775447 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.786909 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.804696 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.820731 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c
a194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.825856 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.825913 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.825928 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.825951 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.825966 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.838324 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.853211 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.867068 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.883469 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.899616 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.915034 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.928685 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.928745 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.928760 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.928777 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.928789 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:27Z","lastTransitionTime":"2025-10-06T15:01:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.938369 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2
946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.952378 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.968475 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:27 crc kubenswrapper[4888]: I1006 15:01:27.986234 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:27Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.003912 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.018143 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.031077 4888 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.031564 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.031625 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.031692 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.031779 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:28Z","lastTransitionTime":"2025-10-06T15:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.032488 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.048258 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.058958 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.075592 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c
a194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.086936 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.133947 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.133983 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.133994 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.134010 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.134020 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:28Z","lastTransitionTime":"2025-10-06T15:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.140334 4888 generic.go:334] "Generic (PLEG): container finished" podID="22b737e9-61a2-4561-9dfe-6edb6ca1f976" containerID="2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3" exitCode=0 Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.140425 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" event={"ID":"22b737e9-61a2-4561-9dfe-6edb6ca1f976","Type":"ContainerDied","Data":"2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3"} Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.156018 4888 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.156755 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.171286 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.187102 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.202727 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.214237 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.229736 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.236735 4888 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.236829 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.236842 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.236859 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.236871 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:28Z","lastTransitionTime":"2025-10-06T15:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.244836 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.271034 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z 
is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.284963 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.297279 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.316224 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.327012 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.339327 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.339373 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.339385 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.339402 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.339411 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:28Z","lastTransitionTime":"2025-10-06T15:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.341340 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.374248 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.442548 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.442589 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.442601 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.442621 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.442634 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:28Z","lastTransitionTime":"2025-10-06T15:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.520049 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.520204 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.520241 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.520271 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:01:36.520236197 +0000 UTC m=+36.332586915 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.520353 4888 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.520404 4888 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.520425 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:36.520404862 +0000 UTC m=+36.332755590 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.520522 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:36.520499764 +0000 UTC m=+36.332850482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.545821 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.545870 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.545883 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.545900 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.545912 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:28Z","lastTransitionTime":"2025-10-06T15:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.621605 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.621670 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.621834 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.621855 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.621870 4888 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.621933 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:36.621913889 +0000 UTC m=+36.434264627 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.622026 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.622083 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.622108 4888 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.622227 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:36.622189896 +0000 UTC m=+36.434540654 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.648628 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.648685 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.648705 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.648728 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.648747 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:28Z","lastTransitionTime":"2025-10-06T15:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.750553 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.750592 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.750601 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.750619 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.750636 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:28Z","lastTransitionTime":"2025-10-06T15:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.852608 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.852639 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.852650 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.852665 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.852674 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:28Z","lastTransitionTime":"2025-10-06T15:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.920448 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.920535 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.920621 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.920759 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.920963 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:28 crc kubenswrapper[4888]: E1006 15:01:28.921110 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.954828 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.954856 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.954864 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.954878 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:28 crc kubenswrapper[4888]: I1006 15:01:28.954885 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:28Z","lastTransitionTime":"2025-10-06T15:01:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.057265 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.057315 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.057332 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.057356 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.057374 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:29Z","lastTransitionTime":"2025-10-06T15:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.149190 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" event={"ID":"22b737e9-61a2-4561-9dfe-6edb6ca1f976","Type":"ContainerStarted","Data":"9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.154361 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.160120 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.160162 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.160174 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.160190 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.160201 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:29Z","lastTransitionTime":"2025-10-06T15:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.166109 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.181505 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.198546 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.210923 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.225515 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.241733 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.263093 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.263128 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.263137 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.263152 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.263162 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:29Z","lastTransitionTime":"2025-10-06T15:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.263532 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z 
is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.274952 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.289856 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.306016 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.317349 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.332930 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.346936 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.361077 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.365341 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.365371 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.365381 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.365396 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.365406 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:29Z","lastTransitionTime":"2025-10-06T15:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.376242 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.388411 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.401018 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.413582 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.430893 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc
/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run
-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.443081 4888 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.453566 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.467398 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.467459 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.467470 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.467511 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.467525 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:29Z","lastTransitionTime":"2025-10-06T15:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.469867 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.480162 4888 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.497440 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.511204 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.523346 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.534143 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.547259 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:29Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.569779 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.569829 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.569840 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.569855 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.569866 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:29Z","lastTransitionTime":"2025-10-06T15:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.672283 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.672312 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.672320 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.672332 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.672341 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:29Z","lastTransitionTime":"2025-10-06T15:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.773957 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.774231 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.774320 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.774416 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.774510 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:29Z","lastTransitionTime":"2025-10-06T15:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.876860 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.876890 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.876900 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.876915 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.876925 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:29Z","lastTransitionTime":"2025-10-06T15:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.978937 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.979208 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.979436 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.979603 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:29 crc kubenswrapper[4888]: I1006 15:01:29.979760 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:29Z","lastTransitionTime":"2025-10-06T15:01:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.082220 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.082509 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.082590 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.082686 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.082766 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:30Z","lastTransitionTime":"2025-10-06T15:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.157902 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.158280 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.158309 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.181766 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.183913 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.185847 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.185899 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.185917 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.185936 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.185950 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:30Z","lastTransitionTime":"2025-10-06T15:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.197681 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.213593 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.224361 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.237897 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.251895 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.263079 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.272937 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.284515 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.288339 4888 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.288394 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.288412 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.288433 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.288449 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:30Z","lastTransitionTime":"2025-10-06T15:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.297297 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.314742 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6e45de6add61ece61a90f59a9dec0eacdfc8bb
743c033e10863bb449042591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.325569 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.336380 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.350414 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.363049 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.378402 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.390449 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.390487 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.390497 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.390512 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.390522 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:30Z","lastTransitionTime":"2025-10-06T15:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.391638 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.402471 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.424835 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.442499 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.460773 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.471254 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.481554 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.490824 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.492338 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.492376 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.492387 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.492405 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.492418 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:30Z","lastTransitionTime":"2025-10-06T15:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.504178 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.515112 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.531933 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.577650 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.594488 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.594530 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:30 crc 
kubenswrapper[4888]: I1006 15:01:30.594540 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.594555 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.594572 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:30Z","lastTransitionTime":"2025-10-06T15:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.611680 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 
06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.697524 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.697581 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.697593 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.697610 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.697622 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:30Z","lastTransitionTime":"2025-10-06T15:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.800099 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.800165 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.800184 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.800211 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.800231 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:30Z","lastTransitionTime":"2025-10-06T15:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.902262 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.902304 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.902314 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.902331 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.902342 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:30Z","lastTransitionTime":"2025-10-06T15:01:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.921222 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:30 crc kubenswrapper[4888]: E1006 15:01:30.921357 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.921429 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:30 crc kubenswrapper[4888]: E1006 15:01:30.921541 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.921548 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:30 crc kubenswrapper[4888]: E1006 15:01:30.921615 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.933975 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.945229 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.958273 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.968450 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.983748 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:30 crc kubenswrapper[4888]: I1006 15:01:30.994858 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.004336 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.004367 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.004376 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.004389 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.004399 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:31Z","lastTransitionTime":"2025-10-06T15:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.007857 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.020487 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
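
Every status-patch failure in this stretch bottoms out in the same x509 error: the serving certificate behind the pod.network-node-identity.openshift.io webhook expired at 2025-08-24T17:21:41Z, while the node clock reads 2025-10-06T15:01:31Z. A minimal stdlib-only sketch (annotation, not part of the log) that pulls both timestamps out of such an entry and reports how long the certificate has been expired:

import re
from datetime import datetime, timezone

# Matches the exact x509 error text seen in the entries above.
X509_RE = re.compile(
    r"current time (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)"
    r" is after (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)"
)

def expiry_lag(entry: str):
    """Return (now, not_after, lag) if the entry carries the expiry error."""
    m = X509_RE.search(entry)
    if m is None:
        return None
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    now, not_after = (
        datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc)
        for ts in m.groups()
    )
    return now, not_after, now - not_after

sample = ("tls: failed to verify certificate: x509: certificate has expired "
          "or is not yet valid: current time 2025-10-06T15:01:31Z is after "
          "2025-08-24T17:21:41Z")
print(expiry_lag(sample)[2])  # 42 days, 21:39:50 for the sample above
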
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.032958 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.044537 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.054253 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
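
Each failing entry also embeds, escaped twice over, the exact JSON status patch the kubelet tried to apply. A sketch for inspecting it (this assumes one journal entry per string and is not kubelet tooling; the two replace passes each undo one escaping layer):

import json
import re

# The patch sits between the \"...\" delimiters of the err= field.
PATCH_RE = re.compile(r'failed to patch status \\"(.*?)\\" for pod')

def extract_patch(entry: str) -> dict:
    m = PATCH_RE.search(entry)
    if m is None:
        raise ValueError("no status patch in this entry")
    raw = m.group(1)
    # Undo one escaping layer per pass: \\ -> \ and \" -> " (the NUL
    # placeholder keeps the two substitutions from interfering).
    for _ in range(2):
        raw = (raw.replace("\\\\", "\x00")
                  .replace('\\"', '"')
                  .replace("\x00", "\\"))
    return json.loads(raw)

# e.g. extract_patch(entry)["status"]["conditions"] lists the Ready /
# ContainersReady conditions the kubelet could not persist.
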
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.091549 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.106411 4888 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.106446 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.106460 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.106475 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.106487 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:31Z","lastTransitionTime":"2025-10-06T15:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.135262 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.161765 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/0.log" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.165539 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591" exitCode=1 Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.165579 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591"} Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.166172 4888 scope.go:117] "RemoveContainer" containerID="0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.181836 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.208705 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.208738 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.208747 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.208761 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.208771 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:31Z","lastTransitionTime":"2025-10-06T15:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.213535 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.259895 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.297658 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.311255 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.311343 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.311359 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.311406 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.311424 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:31Z","lastTransitionTime":"2025-10-06T15:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.338474 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.373283 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.413533 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.413584 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.413606 4888 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.413625 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.413636 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:31Z","lastTransitionTime":"2025-10-06T15:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.417078 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.453880 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.494855 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.516500 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.516539 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.516548 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.516563 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.516574 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:31Z","lastTransitionTime":"2025-10-06T15:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.537249 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.574522 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.612184 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.619073 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.619118 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.619128 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.619143 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.619153 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:31Z","lastTransitionTime":"2025-10-06T15:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.655318 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.705037 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.721238 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.721267 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.721276 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.721290 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.721301 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:31Z","lastTransitionTime":"2025-10-06T15:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.742785 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:30Z\\\",\\\"message\\\":\\\" event handler 4\\\\nI1006 15:01:30.952469 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 15:01:30.952926 6084 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.953093 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 15:01:30.952537 6084 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.952327 6084 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.953579 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 15:01:30.953597 6084 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 15:01:30.953606 6084 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 15:01:30.953919 6084 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 15:01:30.953956 6084 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477
e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.823874 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.823916 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.823927 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.823942 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.823953 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:31Z","lastTransitionTime":"2025-10-06T15:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.926359 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.926422 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.926438 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.926461 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:31 crc kubenswrapper[4888]: I1006 15:01:31.926477 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:31Z","lastTransitionTime":"2025-10-06T15:01:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.028740 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.028777 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.028806 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.028824 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.028836 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:32Z","lastTransitionTime":"2025-10-06T15:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.130746 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.130830 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.130840 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.130856 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.130865 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:32Z","lastTransitionTime":"2025-10-06T15:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.170862 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/0.log" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.173739 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b"} Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.174138 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.188127 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.208001 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.226049 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.233137 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.233168 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.233176 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.233189 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.233198 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:32Z","lastTransitionTime":"2025-10-06T15:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.239231 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.252305 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.268580 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57
aa477821dc9b946958e72c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:30Z\\\",\\\"message\\\":\\\" event handler 4\\\\nI1006 15:01:30.952469 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 15:01:30.952926 6084 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.953093 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 15:01:30.952537 6084 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.952327 6084 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.953579 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 15:01:30.953597 6084 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 15:01:30.953606 6084 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 15:01:30.953919 6084 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 15:01:30.953956 6084 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.280224 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.291243 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.302385 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.312679 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.324293 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.335527 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.335563 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.335574 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.335589 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.335600 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:32Z","lastTransitionTime":"2025-10-06T15:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.337128 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.350941 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.360666 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:32Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.437665 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.437697 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.437705 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.437718 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.437727 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:32Z","lastTransitionTime":"2025-10-06T15:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.540121 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.540180 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.540196 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.540218 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.540232 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:32Z","lastTransitionTime":"2025-10-06T15:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.643832 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.643895 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.643912 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.643939 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.643965 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:32Z","lastTransitionTime":"2025-10-06T15:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.746908 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.746951 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.746962 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.746983 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.746996 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:32Z","lastTransitionTime":"2025-10-06T15:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.848833 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.849091 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.849167 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.849239 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.849310 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:32Z","lastTransitionTime":"2025-10-06T15:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.921256 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:32 crc kubenswrapper[4888]: E1006 15:01:32.921776 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.921352 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.921257 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:32 crc kubenswrapper[4888]: E1006 15:01:32.922236 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:32 crc kubenswrapper[4888]: E1006 15:01:32.922439 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.951857 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.951898 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.951907 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.951924 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:32 crc kubenswrapper[4888]: I1006 15:01:32.951935 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:32Z","lastTransitionTime":"2025-10-06T15:01:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.055448 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.055522 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.055546 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.055575 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.055598 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:33Z","lastTransitionTime":"2025-10-06T15:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.157665 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.157707 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.157723 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.157744 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.157758 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:33Z","lastTransitionTime":"2025-10-06T15:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.178316 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/1.log" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.178891 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/0.log" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.184454 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b" exitCode=1 Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.184502 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.184542 4888 scope.go:117] "RemoveContainer" containerID="0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.185370 4888 scope.go:117] "RemoveContainer" containerID="bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b" Oct 06 15:01:33 crc kubenswrapper[4888]: E1006 15:01:33.185613 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.200459 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.214687 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.229891 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.242637 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.254182 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.259867 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.259901 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.259910 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.259928 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.259939 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:33Z","lastTransitionTime":"2025-10-06T15:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.264889 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.277345 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.290752 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.308016 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:30Z\\\",\\\"message\\\":\\\" event handler 4\\\\nI1006 15:01:30.952469 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 15:01:30.952926 6084 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.953093 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 15:01:30.952537 6084 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.952327 6084 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.953579 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 15:01:30.953597 6084 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 15:01:30.953606 6084 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 15:01:30.953919 6084 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 15:01:30.953956 6084 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:32Z\\\",\\\"message\\\":\\\"ble:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 15:01:32.445938 6211 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1006 15:01:32.446025 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.319574 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.329946 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.342382 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.351607 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.361768 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.361828 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.361842 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.361858 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.361872 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:33Z","lastTransitionTime":"2025-10-06T15:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.363284 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.464354 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.464387 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.464396 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.464410 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.464420 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:33Z","lastTransitionTime":"2025-10-06T15:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.567596 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.567651 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.567668 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.567695 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.567713 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:33Z","lastTransitionTime":"2025-10-06T15:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.670728 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.670779 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.670821 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.670856 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.670873 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:33Z","lastTransitionTime":"2025-10-06T15:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.773666 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.773740 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.773770 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.773788 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.773815 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:33Z","lastTransitionTime":"2025-10-06T15:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.876422 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.876482 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.876492 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.876512 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.876523 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:33Z","lastTransitionTime":"2025-10-06T15:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.879877 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl"] Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.886638 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.890347 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.890503 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.920726 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57
aa477821dc9b946958e72c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6e45de6add61ece61a90f59a9dec0eacdfc8bb743c033e10863bb449042591\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:30Z\\\",\\\"message\\\":\\\" event handler 4\\\\nI1006 15:01:30.952469 6084 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 15:01:30.952926 6084 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.953093 6084 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 15:01:30.952537 6084 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.952327 6084 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 15:01:30.953579 6084 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 15:01:30.953597 6084 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1006 15:01:30.953606 6084 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1006 15:01:30.953919 6084 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 15:01:30.953956 6084 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:32Z\\\",\\\"message\\\":\\\"ble:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 15:01:32.445938 6211 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1006 15:01:32.446025 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"ho
stIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.941707 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.956553 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.969515 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.978942 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.978988 4888 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.979005 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.979026 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.979041 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:33Z","lastTransitionTime":"2025-10-06T15:01:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.980541 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vb7\" (UniqueName: \"kubernetes.io/projected/8bdba58e-334c-4ef0-8498-d233789c62b9-kube-api-access-q7vb7\") pod \"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.980645 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bdba58e-334c-4ef0-8498-d233789c62b9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.980700 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bdba58e-334c-4ef0-8498-d233789c62b9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.980771 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bdba58e-334c-4ef0-8498-d233789c62b9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.983014 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:33 crc kubenswrapper[4888]: I1006 15:01:33.998508 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:33Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.020342 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.034575 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.045693 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.057859 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.066564 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.076032 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.080530 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.080572 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.080588 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.080609 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.080625 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:34Z","lastTransitionTime":"2025-10-06T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.081297 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bdba58e-334c-4ef0-8498-d233789c62b9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.081340 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bdba58e-334c-4ef0-8498-d233789c62b9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.081366 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bdba58e-334c-4ef0-8498-d233789c62b9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.081386 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vb7\" (UniqueName: \"kubernetes.io/projected/8bdba58e-334c-4ef0-8498-d233789c62b9-kube-api-access-q7vb7\") pod \"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.082213 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bdba58e-334c-4ef0-8498-d233789c62b9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.082513 4888 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bdba58e-334c-4ef0-8498-d233789c62b9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.086442 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bdba58e-334c-4ef0-8498-d233789c62b9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.090538 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" 
certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.100161 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vb7\" (UniqueName: \"kubernetes.io/projected/8bdba58e-334c-4ef0-8498-d233789c62b9-kube-api-access-q7vb7\") pod 
\"ovnkube-control-plane-749d76644c-p4wzl\" (UID: \"8bdba58e-334c-4ef0-8498-d233789c62b9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.102082 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.114072 4888 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.184124 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.184189 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.184213 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.184241 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.184261 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:34Z","lastTransitionTime":"2025-10-06T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.190450 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/1.log" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.195164 4888 scope.go:117] "RemoveContainer" containerID="bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b" Oct 06 15:01:34 crc kubenswrapper[4888]: E1006 15:01:34.195492 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.204361 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.207912 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.230195 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57
aa477821dc9b946958e72c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:32Z\\\",\\\"message\\\":\\\"ble:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 15:01:32.445938 6211 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1006 15:01:32.446025 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.248020 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.258199 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.270658 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.282353 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.286147 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.286175 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.286184 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.286197 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.286206 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:34Z","lastTransitionTime":"2025-10-06T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.295374 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.309777 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.322259 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.338994 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.349917 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.361343 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.378740 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.388651 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.388697 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.388743 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.388762 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.388774 4888 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:34Z","lastTransitionTime":"2025-10-06T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.395359 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.411021 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.491756 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.491791 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.491815 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.491830 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.491840 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:34Z","lastTransitionTime":"2025-10-06T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.593369 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.593404 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.593412 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.593427 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.593436 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:34Z","lastTransitionTime":"2025-10-06T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.695678 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.695718 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.695726 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.695742 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.695752 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:34Z","lastTransitionTime":"2025-10-06T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.797717 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.797758 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.797770 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.797785 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.797811 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:34Z","lastTransitionTime":"2025-10-06T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.899780 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.899832 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.899849 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.899870 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.899881 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:34Z","lastTransitionTime":"2025-10-06T15:01:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.921188 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.921244 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:34 crc kubenswrapper[4888]: E1006 15:01:34.921306 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:34 crc kubenswrapper[4888]: E1006 15:01:34.921463 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.921569 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:34 crc kubenswrapper[4888]: E1006 15:01:34.921691 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.984898 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hm59m"] Oct 06 15:01:34 crc kubenswrapper[4888]: I1006 15:01:34.985400 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:34 crc kubenswrapper[4888]: E1006 15:01:34.985482 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.000760 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:34Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.002128 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.002175 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.002191 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.002213 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.002231 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:35Z","lastTransitionTime":"2025-10-06T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.015098 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.028192 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.037127 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.046596 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.058994 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.070884 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.081978 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.091611 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62b7c\" (UniqueName: \"kubernetes.io/projected/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-kube-api-access-62b7c\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.091611 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.091678 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.104353 4888 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.104406 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.104424 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.104447 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.104460 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:35Z","lastTransitionTime":"2025-10-06T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.110064 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57
aa477821dc9b946958e72c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:32Z\\\",\\\"message\\\":\\\"ble:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 15:01:32.445938 6211 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1006 15:01:32.446025 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.122864 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.133302 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.143076 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.152442 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.163151 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.175692 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.192596 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62b7c\" (UniqueName: \"kubernetes.io/projected/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-kube-api-access-62b7c\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.192655 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:35 crc kubenswrapper[4888]: E1006 15:01:35.192757 4888 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:01:35 crc kubenswrapper[4888]: E1006 15:01:35.192834 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs podName:2aee40f4-3a30-43cb-aa49-aabcf3c074b7 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:35.692791666 +0000 UTC m=+35.505142384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs") pod "network-metrics-daemon-hm59m" (UID: "2aee40f4-3a30-43cb-aa49-aabcf3c074b7") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.197986 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" event={"ID":"8bdba58e-334c-4ef0-8498-d233789c62b9","Type":"ContainerStarted","Data":"be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.198025 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" event={"ID":"8bdba58e-334c-4ef0-8498-d233789c62b9","Type":"ContainerStarted","Data":"607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.198034 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" event={"ID":"8bdba58e-334c-4ef0-8498-d233789c62b9","Type":"ContainerStarted","Data":"1fa2b523b299335fe38ce682a7ba46bde23172c42593ec51ff0fc2e5da0c9d58"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.206172 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.206220 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.206233 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.206252 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.206264 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:35Z","lastTransitionTime":"2025-10-06T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.208343 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62b7c\" (UniqueName: \"kubernetes.io/projected/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-kube-api-access-62b7c\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.213590 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f
51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.222903 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.232183 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.243479 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.255153 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.267405 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.281171 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.293701 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.307682 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.308336 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.308370 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.308379 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.308391 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.308400 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:35Z","lastTransitionTime":"2025-10-06T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.318717 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.327879 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.338549 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.351226 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.366944 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:32Z\\\",\\\"message\\\":\\\"ble:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 15:01:32.445938 6211 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1006 15:01:32.446025 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node 
network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.378447 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.410709 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.410745 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.410754 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.410769 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.410781 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:35Z","lastTransitionTime":"2025-10-06T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.414403 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:35Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.514093 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.514169 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.514190 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.514214 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.514230 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:35Z","lastTransitionTime":"2025-10-06T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.616748 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.616833 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.616851 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.616880 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.616899 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:35Z","lastTransitionTime":"2025-10-06T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.699323 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:35 crc kubenswrapper[4888]: E1006 15:01:35.699605 4888 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:01:35 crc kubenswrapper[4888]: E1006 15:01:35.699747 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs podName:2aee40f4-3a30-43cb-aa49-aabcf3c074b7 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:36.699716126 +0000 UTC m=+36.512067024 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs") pod "network-metrics-daemon-hm59m" (UID: "2aee40f4-3a30-43cb-aa49-aabcf3c074b7") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.719904 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.719980 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.720007 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.720041 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.720064 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:35Z","lastTransitionTime":"2025-10-06T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.822467 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.822503 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.822512 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.822526 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.822537 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:35Z","lastTransitionTime":"2025-10-06T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.925331 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.925379 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.925395 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.925413 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:35 crc kubenswrapper[4888]: I1006 15:01:35.925428 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:35Z","lastTransitionTime":"2025-10-06T15:01:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.028324 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.028394 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.028409 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.028434 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.028452 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:36Z","lastTransitionTime":"2025-10-06T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.131782 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.131944 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.131972 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.132006 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.132055 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:36Z","lastTransitionTime":"2025-10-06T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.235692 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.235746 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.235758 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.235778 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.235811 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:36Z","lastTransitionTime":"2025-10-06T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.338194 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.338265 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.338277 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.338296 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.338309 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:36Z","lastTransitionTime":"2025-10-06T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.441295 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.441327 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.441336 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.441351 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.441360 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:36Z","lastTransitionTime":"2025-10-06T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.544711 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.544767 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.544785 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.544842 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.544860 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:36Z","lastTransitionTime":"2025-10-06T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.610630 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.610774 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.610847 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:01:52.610779694 +0000 UTC m=+52.423130442 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.610908 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.611013 4888 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.611059 4888 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.611117 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:52.611098093 +0000 UTC m=+52.423448841 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.611144 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:52.611132173 +0000 UTC m=+52.423482921 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.647274 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.647320 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.647336 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.647359 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.647374 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:36Z","lastTransitionTime":"2025-10-06T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.712153 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.712213 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.712262 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.712379 4888 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.712400 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.712458 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.712504 4888 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.712519 4888 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.712474 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs podName:2aee40f4-3a30-43cb-aa49-aabcf3c074b7 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:38.712451006 +0000 UTC m=+38.524801724 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs") pod "network-metrics-daemon-hm59m" (UID: "2aee40f4-3a30-43cb-aa49-aabcf3c074b7") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.712469 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.712619 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:52.71259394 +0000 UTC m=+52.524944708 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.712625 4888 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.712686 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:52.712662162 +0000 UTC m=+52.525012940 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.750088 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.750152 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.750163 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.750181 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.750191 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:36Z","lastTransitionTime":"2025-10-06T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.853147 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.853178 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.853186 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.853199 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.853207 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:36Z","lastTransitionTime":"2025-10-06T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.920455 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.920504 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.920549 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.920455 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.920685 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.920891 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.921042 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:01:36 crc kubenswrapper[4888]: E1006 15:01:36.921206 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.956882 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.956948 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.956972 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.957000 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:36 crc kubenswrapper[4888]: I1006 15:01:36.957017 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:36Z","lastTransitionTime":"2025-10-06T15:01:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.059404 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.059447 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.059462 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.059483 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.059499 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.161835 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.161867 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.161877 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.161891 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.161902 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.265652 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.266624 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.266678 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.266706 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.266724 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.328356 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.328701 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.328717 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.328743 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.328761 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: E1006 15:01:37.344064 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:37Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.348577 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.348640 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.348661 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.348691 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.348712 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: E1006 15:01:37.368361 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.373524 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.373577 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.373587 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.373609 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.373624 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:01:37 crc kubenswrapper[4888]: E1006 15:01:37.385346 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the 15:01:37.344064 entry above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:37Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.390019 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.390066 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.390083 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.390104 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.390119 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:01:37 crc kubenswrapper[4888]: E1006 15:01:37.405141 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the 15:01:37.344064 entry above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:37Z is after 2025-08-24T17:21:41Z"
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.408734 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.408775 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.408786 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.408823 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.408834 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: E1006 15:01:37.423626 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:37Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:37 crc kubenswrapper[4888]: E1006 15:01:37.423883 4888 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.425699 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.425738 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.425768 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.425788 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.425809 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.528553 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.528597 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.528607 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.528624 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.528635 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.631400 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.631462 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.631481 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.631505 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.631521 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.734785 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.734902 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.734925 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.734957 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.734979 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.837769 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.837893 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.837920 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.837955 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.837978 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.941062 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.941097 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.941110 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.941126 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:37 crc kubenswrapper[4888]: I1006 15:01:37.941137 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:37Z","lastTransitionTime":"2025-10-06T15:01:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.043478 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.043526 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.043537 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.043551 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.043561 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:38Z","lastTransitionTime":"2025-10-06T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.146385 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.146419 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.146463 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.146480 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.146490 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:38Z","lastTransitionTime":"2025-10-06T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.249477 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.249521 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.249535 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.249552 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.249565 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:38Z","lastTransitionTime":"2025-10-06T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.352090 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.352133 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.352146 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.352163 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.352176 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:38Z","lastTransitionTime":"2025-10-06T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.454320 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.454359 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.454370 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.454386 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.454398 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:38Z","lastTransitionTime":"2025-10-06T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.558031 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.558100 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.558112 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.558129 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.558141 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:38Z","lastTransitionTime":"2025-10-06T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.660770 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.660853 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.660870 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.660891 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.660907 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:38Z","lastTransitionTime":"2025-10-06T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.734887 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:38 crc kubenswrapper[4888]: E1006 15:01:38.735097 4888 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:01:38 crc kubenswrapper[4888]: E1006 15:01:38.735222 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs podName:2aee40f4-3a30-43cb-aa49-aabcf3c074b7 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:42.735188991 +0000 UTC m=+42.547539759 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs") pod "network-metrics-daemon-hm59m" (UID: "2aee40f4-3a30-43cb-aa49-aabcf3c074b7") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.763491 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.763534 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.763543 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.763558 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.763567 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:38Z","lastTransitionTime":"2025-10-06T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.866626 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.866696 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.866712 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.866763 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.866778 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:38Z","lastTransitionTime":"2025-10-06T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.920647 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.920691 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.920698 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.920724 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:38 crc kubenswrapper[4888]: E1006 15:01:38.921483 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.921647 4888 scope.go:117] "RemoveContainer" containerID="5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a" Oct 06 15:01:38 crc kubenswrapper[4888]: E1006 15:01:38.921702 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:01:38 crc kubenswrapper[4888]: E1006 15:01:38.921905 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:38 crc kubenswrapper[4888]: E1006 15:01:38.921990 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.969642 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.969691 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.969707 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.969729 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:38 crc kubenswrapper[4888]: I1006 15:01:38.969744 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:38Z","lastTransitionTime":"2025-10-06T15:01:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.072362 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.072419 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.072433 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.072453 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.072468 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:39Z","lastTransitionTime":"2025-10-06T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.175430 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.175567 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.175582 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.175602 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.175616 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:39Z","lastTransitionTime":"2025-10-06T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.218425 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.220352 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131"} Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.220720 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.235742 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.251682 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.263285 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.277162 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.278150 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.278180 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.278192 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.278209 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.278219 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:39Z","lastTransitionTime":"2025-10-06T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.291198 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.304582 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.319400 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.337673 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.350377 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.359909 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.368424 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.377173 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.380627 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.380671 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.380680 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.380694 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.380703 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:39Z","lastTransitionTime":"2025-10-06T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.387925 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.400178 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.417896 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57
aa477821dc9b946958e72c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:32Z\\\",\\\"message\\\":\\\"ble:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 15:01:32.445938 6211 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1006 15:01:32.446025 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.431602 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:39Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.483467 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.483517 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.483526 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.483542 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.483552 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:39Z","lastTransitionTime":"2025-10-06T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.586453 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.586493 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.586503 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.586518 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.586529 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:39Z","lastTransitionTime":"2025-10-06T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.689426 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.689470 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.689482 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.689499 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.689512 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:39Z","lastTransitionTime":"2025-10-06T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.793227 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.793295 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.793313 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.793344 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.793449 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:39Z","lastTransitionTime":"2025-10-06T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.896880 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.896945 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.896962 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.896987 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.897005 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:39Z","lastTransitionTime":"2025-10-06T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.999705 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.999744 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.999757 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.999773 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:39 crc kubenswrapper[4888]: I1006 15:01:39.999785 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:39Z","lastTransitionTime":"2025-10-06T15:01:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.102048 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.102080 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.102089 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.102102 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.102114 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:40Z","lastTransitionTime":"2025-10-06T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.204871 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.205105 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.205230 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.205330 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.205429 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:40Z","lastTransitionTime":"2025-10-06T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.308042 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.308092 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.308109 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.308132 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.308148 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:40Z","lastTransitionTime":"2025-10-06T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.409809 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.409950 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.409965 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.409982 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.410002 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:40Z","lastTransitionTime":"2025-10-06T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.512951 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.512994 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.513011 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.513026 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.513034 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:40Z","lastTransitionTime":"2025-10-06T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.615479 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.615513 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.615521 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.615538 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.615547 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:40Z","lastTransitionTime":"2025-10-06T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.717508 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.717569 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.717587 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.717610 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.717626 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:40Z","lastTransitionTime":"2025-10-06T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.821165 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.821243 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.821286 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.821319 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.821343 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:40Z","lastTransitionTime":"2025-10-06T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.921069 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.921226 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.921098 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:40 crc kubenswrapper[4888]: E1006 15:01:40.921492 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.921265 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:40 crc kubenswrapper[4888]: E1006 15:01:40.921594 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:40 crc kubenswrapper[4888]: E1006 15:01:40.921242 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:40 crc kubenswrapper[4888]: E1006 15:01:40.921667 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.923253 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.923341 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.923357 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.923376 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.923387 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:40Z","lastTransitionTime":"2025-10-06T15:01:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.943637 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:40Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.959682 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:40Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.977292 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:40Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:40 crc kubenswrapper[4888]: I1006 15:01:40.991173 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:40Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.003530 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.015238 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.025747 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.025807 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.025820 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.025835 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.025846 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:41Z","lastTransitionTime":"2025-10-06T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.028498 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.042913 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.064663 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57
aa477821dc9b946958e72c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:32Z\\\",\\\"message\\\":\\\"ble:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 15:01:32.445938 6211 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1006 15:01:32.446025 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.078954 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.093556 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.111429 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.121247 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.128600 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.128660 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.128676 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.128692 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.128704 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:41Z","lastTransitionTime":"2025-10-06T15:01:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.132071 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.144528 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:41 crc kubenswrapper[4888]: I1006 15:01:41.156734 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:41Z is after 2025-08-24T17:21:41Z"
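[Note: every "Failed to update status for pod" entry above fails for the same root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is earlier than the node clock (2025-10-06T15:01:41Z). A minimal Go sketch of the validity-window check that produces the "x509: certificate has expired or is not yet valid" error; the file name is illustrative, not taken from the log:]

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Illustrative path; the webhook's actual serving certificate
        // lives wherever network-node-identity mounts it.
        pemBytes, err := os.ReadFile("serving-cert.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now()
        // This is the validity-window test the TLS client applies; when it
        // fails, Go reports "x509: certificate has expired or is not yet valid".
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("certificate invalid: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339),
                cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }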
[Repeated status-loop entries elided: from 15:01:41.231 through 15:01:42.774 the kubelet logs the same five-entry block roughly every 100 ms — four kubelet_node_status.go:724 "Recording event message for node" events for node="crc" (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady), followed by a setters.go:603 "Node became not ready" condition with reason KubeletNotReady and message "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?". Only the timestamps advance.]
Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.777875 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m"
Oct 06 15:01:42 crc kubenswrapper[4888]: E1006 15:01:42.778048 4888 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 06 15:01:42 crc kubenswrapper[4888]: E1006 15:01:42.778123 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs podName:2aee40f4-3a30-43cb-aa49-aabcf3c074b7 nodeName:}" failed. No retries permitted until 2025-10-06 15:01:50.778104409 +0000 UTC m=+50.590455127 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs") pod "network-metrics-daemon-hm59m" (UID: "2aee40f4-3a30-43cb-aa49-aabcf3c074b7") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.878584 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.878644 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.878659 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.878675 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.878684 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:42Z","lastTransitionTime":"2025-10-06T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.920935 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.921151 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:42 crc kubenswrapper[4888]: E1006 15:01:42.921262 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.921307 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:42 crc kubenswrapper[4888]: E1006 15:01:42.921483 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:42 crc kubenswrapper[4888]: E1006 15:01:42.921641 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.921668 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:42 crc kubenswrapper[4888]: E1006 15:01:42.921783 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.981448 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.981483 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.981493 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.981507 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:42 crc kubenswrapper[4888]: I1006 15:01:42.981516 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:42Z","lastTransitionTime":"2025-10-06T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.083851 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.084231 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.084374 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.084479 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.084573 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:43Z","lastTransitionTime":"2025-10-06T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.188221 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.188272 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.188289 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.188312 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.188329 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:43Z","lastTransitionTime":"2025-10-06T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.291350 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.291403 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.291415 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.291438 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.291452 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:43Z","lastTransitionTime":"2025-10-06T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.393992 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.394038 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.394048 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.394063 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.394073 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:43Z","lastTransitionTime":"2025-10-06T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.496456 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.496535 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.496559 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.496592 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.496614 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:43Z","lastTransitionTime":"2025-10-06T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.599538 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.599595 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.599609 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.599626 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.599639 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:43Z","lastTransitionTime":"2025-10-06T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.703125 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.703193 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.703205 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.703223 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.703235 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:43Z","lastTransitionTime":"2025-10-06T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.805686 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.805757 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.805779 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.805844 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.805871 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:43Z","lastTransitionTime":"2025-10-06T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.908372 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.908442 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.908459 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.908485 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:43 crc kubenswrapper[4888]: I1006 15:01:43.908502 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:43Z","lastTransitionTime":"2025-10-06T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.011509 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.011587 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.011598 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.011648 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.011662 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:44Z","lastTransitionTime":"2025-10-06T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.114658 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.114730 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.114753 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.114784 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.114827 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:44Z","lastTransitionTime":"2025-10-06T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.217926 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.217979 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.217998 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.218022 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.218039 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:44Z","lastTransitionTime":"2025-10-06T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.321392 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.321454 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.321475 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.321503 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.321522 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:44Z","lastTransitionTime":"2025-10-06T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.424861 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.424942 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.424963 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.425448 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.425514 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:44Z","lastTransitionTime":"2025-10-06T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.529423 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.529504 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.529528 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.529565 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.529589 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:44Z","lastTransitionTime":"2025-10-06T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.632136 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.632185 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.632196 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.632214 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.632226 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:44Z","lastTransitionTime":"2025-10-06T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.734705 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.734751 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.734765 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.734783 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.734816 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:44Z","lastTransitionTime":"2025-10-06T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.836524 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.836563 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.836579 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.836598 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.836613 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:44Z","lastTransitionTime":"2025-10-06T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.920600 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.920639 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:44 crc kubenswrapper[4888]: E1006 15:01:44.920743 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.920761 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.920820 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:44 crc kubenswrapper[4888]: E1006 15:01:44.921136 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:44 crc kubenswrapper[4888]: E1006 15:01:44.921226 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:01:44 crc kubenswrapper[4888]: E1006 15:01:44.921309 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.921494 4888 scope.go:117] "RemoveContainer" containerID="bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.938820 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.938851 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.938859 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.938873 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:44 crc kubenswrapper[4888]: I1006 15:01:44.938884 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:44Z","lastTransitionTime":"2025-10-06T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.041474 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.041849 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.041870 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.041894 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.041912 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:45Z","lastTransitionTime":"2025-10-06T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.144344 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.144366 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.144375 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.144389 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.144398 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:45Z","lastTransitionTime":"2025-10-06T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.245287 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/1.log" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.246292 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.246318 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.246329 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.246344 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.246356 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:45Z","lastTransitionTime":"2025-10-06T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.248308 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70"} Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.248739 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.265882 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.290459 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.312914 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.324768 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.339778 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.348373 4888 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.348414 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.348426 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.348442 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.348453 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:45Z","lastTransitionTime":"2025-10-06T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.356758 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.375558 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f6111
80c966cb84c1b78066249b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:32Z\\\",\\\"message\\\":\\\"ble:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 15:01:32.445938 6211 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1006 15:01:32.446025 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.389370 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.400904 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.410899 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.419004 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.428013 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 
15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.443379 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.450461 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.450503 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.450513 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.450530 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.450544 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:45Z","lastTransitionTime":"2025-10-06T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.458074 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.472422 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.484740 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:45Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.552442 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.552484 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.552495 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.552514 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.552525 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:45Z","lastTransitionTime":"2025-10-06T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.655114 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.655155 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.655165 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.655182 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.655194 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:45Z","lastTransitionTime":"2025-10-06T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.757969 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.758014 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.758025 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.758044 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.758056 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:45Z","lastTransitionTime":"2025-10-06T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.860614 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.860651 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.860659 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.860672 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.860681 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:45Z","lastTransitionTime":"2025-10-06T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.963369 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.963400 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.963408 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.963422 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:45 crc kubenswrapper[4888]: I1006 15:01:45.963432 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:45Z","lastTransitionTime":"2025-10-06T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.066160 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.066242 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.066267 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.066299 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.066328 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:46Z","lastTransitionTime":"2025-10-06T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.168971 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.169019 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.169031 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.169049 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.169060 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:46Z","lastTransitionTime":"2025-10-06T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.254573 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/2.log" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.255837 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/1.log" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.259236 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70" exitCode=1 Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.259295 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70"} Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.259352 4888 scope.go:117] "RemoveContainer" containerID="bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.260526 4888 scope.go:117] "RemoveContainer" containerID="22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70" Oct 06 15:01:46 crc kubenswrapper[4888]: E1006 15:01:46.260918 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.272294 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.272336 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.272348 4888 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.272367 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.272381 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:46Z","lastTransitionTime":"2025-10-06T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.276156 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.291553 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.303153 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.320031 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.335321 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.356605 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb267680c024da28c0ffc1ddeeab3f8dd016cc57aa477821dc9b946958e72c7b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:32Z\\\",\\\"message\\\":\\\"ble:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 15:01:32.445938 6211 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1006 15:01:32.446025 6211 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node 
network controller: failed to set node crc annotations: Internal error occurred: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:45Z\\\",\\\"message\\\":\\\"1:45.712441 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712217 6441 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1006 15:01:45.712450 6441 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:01:45.712456 6441 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:01:45.712462 6441 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712442 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1006 15:01:45.712474 6441 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1006 15:01:45.712512 6441 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF1006 15:01:45.712516 6441 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.374631 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.374675 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.374697 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.374714 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.374723 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:46Z","lastTransitionTime":"2025-10-06T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.376114 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.391641 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.408132 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.418909 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.431406 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 
15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.444910 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.460932 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.473572 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.477211 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.477241 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.477250 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.477265 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.477275 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:46Z","lastTransitionTime":"2025-10-06T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.487864 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.499515 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:46Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.580370 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.580410 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.580418 4888 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.580432 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.580443 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:46Z","lastTransitionTime":"2025-10-06T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.684081 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.684125 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.684140 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.684158 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.684171 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:46Z","lastTransitionTime":"2025-10-06T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.787572 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.787618 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.787627 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.787644 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.787656 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:46Z","lastTransitionTime":"2025-10-06T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.889495 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.889534 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.889543 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.889561 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.889571 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:46Z","lastTransitionTime":"2025-10-06T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.921334 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.921466 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.921479 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.921734 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:46 crc kubenswrapper[4888]: E1006 15:01:46.921890 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:01:46 crc kubenswrapper[4888]: E1006 15:01:46.922027 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:46 crc kubenswrapper[4888]: E1006 15:01:46.922192 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:46 crc kubenswrapper[4888]: E1006 15:01:46.922242 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.992416 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.992465 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.992476 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.992493 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:46 crc kubenswrapper[4888]: I1006 15:01:46.992505 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:46Z","lastTransitionTime":"2025-10-06T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.095075 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.095171 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.095190 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.095216 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.095232 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.197216 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.197286 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.197295 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.197309 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.197318 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.264280 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/2.log" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.267891 4888 scope.go:117] "RemoveContainer" containerID="22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70" Oct 06 15:01:47 crc kubenswrapper[4888]: E1006 15:01:47.268067 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.280325 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.290949 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.299443 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.299493 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.299509 4888 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.299529 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.299544 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.303957 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.314268 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.332582 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f6111
80c966cb84c1b78066249b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:45Z\\\",\\\"message\\\":\\\"1:45.712441 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712217 6441 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1006 15:01:45.712450 6441 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:01:45.712456 6441 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:01:45.712462 6441 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712442 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1006 15:01:45.712474 6441 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1006 15:01:45.712512 6441 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF1006 15:01:45.712516 6441 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.345714 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.355353 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.366310 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.376856 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.386585 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.398376 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.401777 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.401823 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.401833 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.401845 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.401855 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.410380 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.420906 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.433153 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.442311 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.451222 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 
15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.504285 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.504332 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.504345 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.504559 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.504570 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.541648 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.541719 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.541744 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.541772 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.541827 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: E1006 15:01:47.562477 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.566025 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.566061 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.566070 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.566087 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.566098 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: E1006 15:01:47.576846 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.580163 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.580202 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.580215 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.580235 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.580249 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: E1006 15:01:47.592528 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.595553 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.595582 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.595591 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.595606 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.595616 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: E1006 15:01:47.606764 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.610013 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.610058 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.610071 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.610086 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.610097 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: E1006 15:01:47.621945 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... image list identical to the first status-patch attempt above elided ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:47Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:47 crc kubenswrapper[4888]: E1006 15:01:47.622054 4888 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
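
The retried patches above all fail for one root cause: the kubelet cannot post node status because the "node.network-node-identity.openshift.io" admission webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-10-06. A minimal way to confirm the certificate dates from inside the CRC VM (assuming shell access and that openssl is installed; neither is shown in the log):

    # Inspect the webhook's serving certificate; notAfter should match the
    # 2025-08-24T17:21:41Z expiry quoted in the kubelet error above.
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null \
      | openssl x509 -noout -dates
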
event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.623922 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.623932 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.623948 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.623957 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.726726 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.726823 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.726837 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.726857 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.726869 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.829860 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.829909 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.829924 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.829944 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.829959 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.931927 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.931962 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.931971 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.931984 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:47 crc kubenswrapper[4888]: I1006 15:01:47.931993 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:47Z","lastTransitionTime":"2025-10-06T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.034244 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.034310 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.034323 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.034343 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.034355 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:48Z","lastTransitionTime":"2025-10-06T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.136515 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.136596 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.136633 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.136664 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.136698 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:48Z","lastTransitionTime":"2025-10-06T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.239292 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.239354 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.239373 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.239393 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.239407 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:48Z","lastTransitionTime":"2025-10-06T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.342531 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.342603 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.342622 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.342650 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.342673 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:48Z","lastTransitionTime":"2025-10-06T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.445523 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.445590 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.445609 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.445634 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.445654 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:48Z","lastTransitionTime":"2025-10-06T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.548311 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.548355 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.548370 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.548393 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.548410 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:48Z","lastTransitionTime":"2025-10-06T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.676263 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.676322 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.676332 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.676345 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.676353 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:48Z","lastTransitionTime":"2025-10-06T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.779572 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.779661 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.779675 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.779694 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.779706 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:48Z","lastTransitionTime":"2025-10-06T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
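
The NodeNotReady loop above is the downstream symptom: with no CNI configuration in /etc/kubernetes/cni/net.d/, the container runtime reports NetworkReady=false, so the kubelet keeps re-recording the same four node events and refuses to start pod sandboxes. On OpenShift that directory is written by the cluster network plugin, which on this cluster appears to be Multus plus OVN-Kubernetes (the openshift-multus namespace and the network-node-identity webhook both point that way), and that plugin cannot settle while node updates are being rejected. A plausible triage sequence, assuming shell access to the node and a reachable API; the commands are standard, but their availability here is an assumption:

    # Is the CNI config directory really empty? (path taken from the log)
    ls -l /etc/kubernetes/cni/net.d/
    # Are the network plugin containers running at the runtime level?
    sudo crictl ps --name ovnkube
    # Operator view, once the API server answers:
    oc -n openshift-network-operator get pods
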
Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.881950 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.882003 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.882021 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.882042 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.882057 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:48Z","lastTransitionTime":"2025-10-06T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.920997 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:48 crc kubenswrapper[4888]: E1006 15:01:48.921223 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.921708 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.922052 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.921781 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:48 crc kubenswrapper[4888]: E1006 15:01:48.922374 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:48 crc kubenswrapper[4888]: E1006 15:01:48.922416 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:48 crc kubenswrapper[4888]: E1006 15:01:48.922567 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.984255 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.984306 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.984326 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.984352 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:48 crc kubenswrapper[4888]: I1006 15:01:48.984373 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:48Z","lastTransitionTime":"2025-10-06T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.086946 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.087052 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.087077 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.087110 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.087131 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:49Z","lastTransitionTime":"2025-10-06T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.189229 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.189284 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.189302 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.189324 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.189340 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:49Z","lastTransitionTime":"2025-10-06T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.292309 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.292613 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.292713 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.292823 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.292887 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:49Z","lastTransitionTime":"2025-10-06T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.395622 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.395905 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.395987 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.396054 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:49 crc kubenswrapper[4888]: I1006 15:01:49.396131 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:49Z","lastTransitionTime":"2025-10-06T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [... identical node-status event groups repeating every ~100 ms from 15:01:49.499 through 15:01:50.530 elided ...] Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.632968 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.633007 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.633017 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.633034 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.633045 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:50Z","lastTransitionTime":"2025-10-06T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.735298 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.735355 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.735371 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.735395 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.735411 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:50Z","lastTransitionTime":"2025-10-06T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.838928 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.838977 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.838993 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.839015 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.839032 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:50Z","lastTransitionTime":"2025-10-06T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.861631 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:50 crc kubenswrapper[4888]: E1006 15:01:50.861891 4888 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:01:50 crc kubenswrapper[4888]: E1006 15:01:50.862040 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs podName:2aee40f4-3a30-43cb-aa49-aabcf3c074b7 nodeName:}" failed. No retries permitted until 2025-10-06 15:02:06.862010457 +0000 UTC m=+66.674361345 (durationBeforeRetry 16s). 
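
The mount failure for "metrics-certs" is a secondary effect rather than a missing object: "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" means the kubelet's secret manager has not (re)registered the pod's secrets since the restart, so volume setup is parked with backoff (next attempt 15:02:06, 16s away). Two quick checks, assuming API and node access; the secret name comes from the log, but its presence server-side is something to verify, not a fact the log establishes:

    # Does the secret exist in the API?
    oc -n openshift-multus get secret metrics-daemon-secret
    # Watch the kubelet retry the mount after the backoff window
    # (unit name "kubelet" assumed from the service started at boot):
    journalctl -u kubelet --since "15:01:50" | grep metrics-certs
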
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs") pod "network-metrics-daemon-hm59m" (UID: "2aee40f4-3a30-43cb-aa49-aabcf3c074b7") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.920940 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:50 crc kubenswrapper[4888]: E1006 15:01:50.921074 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.921179 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:50 crc kubenswrapper[4888]: E1006 15:01:50.921243 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.921297 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:50 crc kubenswrapper[4888]: E1006 15:01:50.921342 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.921384 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:50 crc kubenswrapper[4888]: E1006 15:01:50.921424 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.936341 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:50Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.940716 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.940812 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.940852 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.940873 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.940885 4888 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:50Z","lastTransitionTime":"2025-10-06T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.951732 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:50Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.967107 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:50Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:50 crc kubenswrapper[4888]: I1006 15:01:50.980853 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:50Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.002149 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f6111
80c966cb84c1b78066249b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:45Z\\\",\\\"message\\\":\\\"1:45.712441 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712217 6441 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1006 15:01:45.712450 6441 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:01:45.712456 6441 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:01:45.712462 6441 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712442 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1006 15:01:45.712474 6441 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1006 15:01:45.712512 6441 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF1006 15:01:45.712516 6441 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:50Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.014091 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.024953 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.033825 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.042885 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.042916 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.042925 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.042938 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.042947 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:51Z","lastTransitionTime":"2025-10-06T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.043607 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.057290 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.069081 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.089621 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.102826 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.122786 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.137353 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.145576 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.145644 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.145659 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.145680 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.145695 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:51Z","lastTransitionTime":"2025-10-06T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.150823 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.249015 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.249081 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.249122 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.249157 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.249179 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:51Z","lastTransitionTime":"2025-10-06T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.352470 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.352522 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.352534 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.352561 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.352573 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:51Z","lastTransitionTime":"2025-10-06T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.455557 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.455604 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.455616 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.455635 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.455649 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:51Z","lastTransitionTime":"2025-10-06T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.558007 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.558048 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.558057 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.558071 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.558086 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:51Z","lastTransitionTime":"2025-10-06T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.660087 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.660126 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.660135 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.660149 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.660158 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:51Z","lastTransitionTime":"2025-10-06T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.681886 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.694597 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.698373 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.710440 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.724143 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.738296 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.758538 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:45Z\\\",\\\"message\\\":\\\"1:45.712441 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712217 6441 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1006 15:01:45.712450 6441 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:01:45.712456 6441 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:01:45.712462 6441 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712442 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1006 15:01:45.712474 6441 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1006 15:01:45.712512 6441 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF1006 15:01:45.712516 6441 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.762582 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.762639 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.762653 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.762672 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.762685 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:51Z","lastTransitionTime":"2025-10-06T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.774677 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.788847 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.804641 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.816488 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.829173 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.844530 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.858134 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.865047 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.865099 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.865110 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.865127 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.865136 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:51Z","lastTransitionTime":"2025-10-06T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.869112 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.881865 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.894444 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.906253 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:51Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.967651 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.967693 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.967704 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.967720 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:51 crc kubenswrapper[4888]: I1006 15:01:51.967732 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:51Z","lastTransitionTime":"2025-10-06T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.070076 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.070122 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.070133 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.070148 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.070158 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:52Z","lastTransitionTime":"2025-10-06T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.176755 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.176849 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.176867 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.176892 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.176910 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:52Z","lastTransitionTime":"2025-10-06T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.278862 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.278915 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.278941 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.278970 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.278991 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:52Z","lastTransitionTime":"2025-10-06T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.381375 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.381444 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.381462 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.381487 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.381505 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:52Z","lastTransitionTime":"2025-10-06T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.484312 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.484379 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.484388 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.484401 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.484410 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:52Z","lastTransitionTime":"2025-10-06T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.587465 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.587516 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.587528 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.587547 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.587559 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:52Z","lastTransitionTime":"2025-10-06T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.681766 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.681954 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.681984 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:24.681947361 +0000 UTC m=+84.494298079 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.682083 4888 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.682094 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.682160 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:02:24.682135236 +0000 UTC m=+84.494485954 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.682206 4888 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.682261 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:02:24.68224788 +0000 UTC m=+84.494598808 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.690766 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.690828 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.690849 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.690871 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.690903 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:52Z","lastTransitionTime":"2025-10-06T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.783954 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.784064 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.784281 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.784304 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.784320 4888 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.784400 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 15:02:24.78437889 +0000 UTC m=+84.596729618 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.784402 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.784470 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.784502 4888 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.784640 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 15:02:24.784595536 +0000 UTC m=+84.596946404 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.793726 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.793792 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.793872 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.793909 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.793931 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:52Z","lastTransitionTime":"2025-10-06T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
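The nestedpendingoperations.go entries above reschedule each failed mount/unmount with "No retries permitted until ... (durationBeforeRetry 32s)": the volume manager backs off exponentially on a repeatedly failing operation rather than retrying hot. The sketch below is a minimal illustration of that doubling-with-cap pattern, assuming a 500 ms initial delay and a cap; these constants and names are assumptions for illustration, not kubelet's actual API, though a 500 ms base doubled per failure does land on the 32 s seen in the log at the seventh failure.

```go
package main

import (
	"fmt"
	"time"
)

// expBackoff is an illustrative sketch of the doubling backoff implied by
// the "durationBeforeRetry 32s" entries: each consecutive failure doubles
// the wait, up to a cap. Initial/cap values are assumptions, not values
// read from kubelet source.
type expBackoff struct {
	initial time.Duration
	cap     time.Duration
	current time.Duration
}

func (b *expBackoff) nextDelay() time.Duration {
	if b.current == 0 {
		b.current = b.initial
		return b.current
	}
	b.current *= 2
	if b.current > b.cap {
		b.current = b.cap
	}
	return b.current
}

func main() {
	b := &expBackoff{initial: 500 * time.Millisecond, cap: 2 * time.Minute}
	for i := 1; i <= 8; i++ {
		fmt.Printf("failure %d: retry in %v\n", i, b.nextDelay())
	}
	// failure 7 prints 32s, matching durationBeforeRetry in the entries above.
}
```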
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.897008 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.897056 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.897070 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.897088 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.897102 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:52Z","lastTransitionTime":"2025-10-06T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.920597 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.920659 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.920609 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 15:01:52 crc kubenswrapper[4888]: I1006 15:01:52.920730 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.920714 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.920863 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7"
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.920986 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 15:01:52 crc kubenswrapper[4888]: E1006 15:01:52.921024 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 15:01:53 crc kubenswrapper[4888]: I1006 15:01:52.999950 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:53 crc kubenswrapper[4888]: I1006 15:01:53.000006 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:53 crc kubenswrapper[4888]: I1006 15:01:53.000021 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:53 crc kubenswrapper[4888]: I1006 15:01:53.000042 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:53 crc kubenswrapper[4888]: I1006 15:01:53.000058 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:53Z","lastTransitionTime":"2025-10-06T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:01:53 crc kubenswrapper[4888]: I1006 15:01:53.102706 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:53 crc kubenswrapper[4888]: I1006 15:01:53.102754 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:53 crc kubenswrapper[4888]: I1006 15:01:53.102765 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:53 crc kubenswrapper[4888]: I1006 15:01:53.102782 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:53 crc kubenswrapper[4888]: I1006 15:01:53.102814 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:53Z","lastTransitionTime":"2025-10-06T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
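The setters.go:603 entries record the node's Ready condition each time it is refreshed; the condition={...} payload in the log is the JSON form of that object. A trimmed sketch of the same shape, grounded directly in the fields visible above (it stands in for the full v1.NodeCondition type from client-go, which also carries typed timestamps):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// nodeCondition mirrors the fields visible in the setters.go:603 entries;
// a trimmed illustration of the v1.NodeCondition shape, not a drop-in
// replacement for the client-go type.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Date(2025, 10, 6, 15, 1, 54, 0, time.UTC).Format(time.RFC3339)
	c := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.Marshal(c)
	fmt.Println(string(out)) // matches the condition={...} payload logged above
}
```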
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.746052 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.746104 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.746116 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.746148 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.746161 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:54Z","lastTransitionTime":"2025-10-06T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.848215 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.848280 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.848302 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.848330 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.848353 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:54Z","lastTransitionTime":"2025-10-06T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.921041 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.921081 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.921203 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 15:01:54 crc kubenswrapper[4888]: E1006 15:01:54.921210 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.921254 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 15:01:54 crc kubenswrapper[4888]: E1006 15:01:54.921385 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 15:01:54 crc kubenswrapper[4888]: E1006 15:01:54.921514 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 15:01:54 crc kubenswrapper[4888]: E1006 15:01:54.921606 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.950202 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.950425 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.950502 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.950584 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:01:54 crc kubenswrapper[4888]: I1006 15:01:54.950655 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:54Z","lastTransitionTime":"2025-10-06T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Oct 06 15:01:56 crc kubenswrapper[4888]: I1006 15:01:56.907274 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:56 crc kubenswrapper[4888]: I1006 15:01:56.907404 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:56 crc kubenswrapper[4888]: I1006 15:01:56.907416 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:56 crc kubenswrapper[4888]: I1006 15:01:56.907430 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:56 crc kubenswrapper[4888]: I1006 15:01:56.907439 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:56Z","lastTransitionTime":"2025-10-06T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:56 crc kubenswrapper[4888]: I1006 15:01:56.921040 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:56 crc kubenswrapper[4888]: E1006 15:01:56.921262 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:01:56 crc kubenswrapper[4888]: I1006 15:01:56.921335 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:56 crc kubenswrapper[4888]: I1006 15:01:56.921395 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:56 crc kubenswrapper[4888]: E1006 15:01:56.921502 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:56 crc kubenswrapper[4888]: I1006 15:01:56.921560 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:56 crc kubenswrapper[4888]: E1006 15:01:56.921636 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:56 crc kubenswrapper[4888]: E1006 15:01:56.921904 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.009732 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.009779 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.009822 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.009846 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.009860 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.112510 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.112572 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.112585 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.112601 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.112610 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.216048 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.216111 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.216127 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.216157 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.216171 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.320124 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.320416 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.320505 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.320602 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.320694 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.423453 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.423526 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.423547 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.423574 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.423593 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.526555 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.526609 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.526621 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.526640 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.526652 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.629609 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.629668 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.629680 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.629729 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.629746 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.732871 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.732925 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.732944 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.732969 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.732989 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.834880 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.834925 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.834938 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.834954 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.834965 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.878630 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.878695 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.878709 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.878728 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.878739 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: E1006 15:01:57.891556 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:57Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.895376 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.895416 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.895431 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.895455 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.895474 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: E1006 15:01:57.907175 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:57Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.910481 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.910550 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.910562 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.910581 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.910594 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.921039 4888 scope.go:117] "RemoveContainer" containerID="22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70" Oct 06 15:01:57 crc kubenswrapper[4888]: E1006 15:01:57.921312 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" Oct 06 15:01:57 crc kubenswrapper[4888]: E1006 15:01:57.922249 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:57Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.924886 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.924917 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.924927 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.924944 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.924955 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: E1006 15:01:57.937003 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:57Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.939844 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.939879 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.939891 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.939909 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.939919 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:57 crc kubenswrapper[4888]: E1006 15:01:57.950556 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:57Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:57 crc kubenswrapper[4888]: E1006 15:01:57.950665 4888 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.953067 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.953202 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.953227 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.953254 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:57 crc kubenswrapper[4888]: I1006 15:01:57.953272 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:57Z","lastTransitionTime":"2025-10-06T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.055929 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.055966 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.055977 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.055992 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.056003 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:58Z","lastTransitionTime":"2025-10-06T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.158854 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.159175 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.159262 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.159359 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.159452 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:58Z","lastTransitionTime":"2025-10-06T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.204501 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.219721 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.234150 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.250929 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.261294 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.261341 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.261356 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.261373 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.261428 4888 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:58Z","lastTransitionTime":"2025-10-06T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.264852 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.278950 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.292924 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.306075 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.316511 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.329582 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.346468 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:45Z\\\",\\\"message\\\":\\\"1:45.712441 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712217 6441 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1006 15:01:45.712450 6441 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:01:45.712456 6441 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:01:45.712462 6441 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712442 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1006 15:01:45.712474 6441 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1006 15:01:45.712512 6441 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF1006 15:01:45.712516 6441 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.358587 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.363471 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.363587 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.363653 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.363714 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.363777 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:58Z","lastTransitionTime":"2025-10-06T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.371727 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.386551 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.397572 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.407099 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 
15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.417604 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.427051 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36c1eb30-6ced-46c5-91c5-95a931bfc2eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://723678c88214fc491c9f4ccbef6edb30aa659fa9420e676bfe71ea068c24b61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d931c3d197f4fdbc7658b3cc758073a4d2a864a44dde544c19247af1182415d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9b275373213ab960807fc270a22dd7e7358b03717db13359a12ed85a5b7698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:01:58Z is after 2025-08-24T17:21:41Z" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.466203 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.466246 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.466255 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.466269 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.466280 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:58Z","lastTransitionTime":"2025-10-06T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.568302 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.568335 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.568344 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.568357 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.568365 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:58Z","lastTransitionTime":"2025-10-06T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.671057 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.671095 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.671103 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.671119 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.671128 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:58Z","lastTransitionTime":"2025-10-06T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.773146 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.773194 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.773207 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.773227 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.773243 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:58Z","lastTransitionTime":"2025-10-06T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.876059 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.876105 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.876126 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.876152 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.876169 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:58Z","lastTransitionTime":"2025-10-06T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.921418 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.921447 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.921488 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:01:58 crc kubenswrapper[4888]: E1006 15:01:58.921595 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:01:58 crc kubenswrapper[4888]: E1006 15:01:58.921675 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:01:58 crc kubenswrapper[4888]: E1006 15:01:58.921752 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.921998 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:01:58 crc kubenswrapper[4888]: E1006 15:01:58.922179 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.978633 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.978670 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.978682 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.978702 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:58 crc kubenswrapper[4888]: I1006 15:01:58.978716 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:58Z","lastTransitionTime":"2025-10-06T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.082438 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.082529 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.082543 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.082568 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.082581 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:59Z","lastTransitionTime":"2025-10-06T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.184952 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.184995 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.185006 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.185021 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.185037 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:59Z","lastTransitionTime":"2025-10-06T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.288037 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.288097 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.288111 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.288127 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.288139 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:59Z","lastTransitionTime":"2025-10-06T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.392029 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.392099 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.392116 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.392142 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.392159 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:59Z","lastTransitionTime":"2025-10-06T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.495426 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.495499 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.495521 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.495552 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.495573 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:59Z","lastTransitionTime":"2025-10-06T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.597854 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.597901 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.597912 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.597925 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.597933 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:59Z","lastTransitionTime":"2025-10-06T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.700361 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.700468 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.700481 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.700505 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.700519 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:59Z","lastTransitionTime":"2025-10-06T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.802972 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.803018 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.803033 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.803053 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.803067 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:59Z","lastTransitionTime":"2025-10-06T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.906566 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.906617 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.906631 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.906653 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:01:59 crc kubenswrapper[4888]: I1006 15:01:59.906667 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:01:59Z","lastTransitionTime":"2025-10-06T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.010035 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.010938 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.010983 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.011016 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.011041 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:00Z","lastTransitionTime":"2025-10-06T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.114585 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.114676 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.114696 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.114722 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.114741 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:00Z","lastTransitionTime":"2025-10-06T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.218106 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.218171 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.218195 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.218226 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.218245 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:00Z","lastTransitionTime":"2025-10-06T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.321183 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.321232 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.321243 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.321259 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.321269 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:00Z","lastTransitionTime":"2025-10-06T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.423185 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.423233 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.423245 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.423265 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.423278 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:00Z","lastTransitionTime":"2025-10-06T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.527112 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.527182 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.527210 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.527243 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.527269 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:00Z","lastTransitionTime":"2025-10-06T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.629653 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.629722 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.629742 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.629831 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.629857 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:00Z","lastTransitionTime":"2025-10-06T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.732786 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.732860 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.732874 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.732894 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.732907 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:00Z","lastTransitionTime":"2025-10-06T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.835664 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.835731 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.835745 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.835765 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.835778 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:00Z","lastTransitionTime":"2025-10-06T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.920372 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.920408 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:00 crc kubenswrapper[4888]: E1006 15:02:00.920715 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.920889 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.920903 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:00 crc kubenswrapper[4888]: E1006 15:02:00.921027 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:00 crc kubenswrapper[4888]: E1006 15:02:00.921127 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:00 crc kubenswrapper[4888]: E1006 15:02:00.921205 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.934692 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:00Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.938611 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.938648 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.938660 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.938677 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.938690 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:00Z","lastTransitionTime":"2025-10-06T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.952781 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:00Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.969907 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:00Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.982640 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:00Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:00 crc kubenswrapper[4888]: I1006 15:02:00.996215 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:00Z is after 2025-08-24T17:21:41Z" Oct 06 
15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.009120 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36c1eb30-6ced-46c5-91c5-95a931bfc2eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://723678c88214fc491c9f4ccbef6edb30aa659fa9420e676bfe71ea068c24b61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d931c3d197f4fdbc7658b3cc758073a4d2a864a44dde544c19247af1182415d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9b275373213ab960807fc270a22dd7e7358b03717db13359a12ed85a5b7698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.022235 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc 
kubenswrapper[4888]: I1006 15:02:01.036955 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.040743 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.040786 4888 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.040822 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.040841 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.040852 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:01Z","lastTransitionTime":"2025-10-06T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.049248 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.059930 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.071724 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.084263 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.095504 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.104904 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.115865 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.127568 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.142638 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.142683 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.142694 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.142709 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.142721 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:01Z","lastTransitionTime":"2025-10-06T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.145612 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f6111
80c966cb84c1b78066249b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:45Z\\\",\\\"message\\\":\\\"1:45.712441 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712217 6441 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1006 15:01:45.712450 6441 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:01:45.712456 6441 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:01:45.712462 6441 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712442 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1006 15:01:45.712474 6441 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1006 15:01:45.712512 6441 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF1006 15:01:45.712516 6441 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:01Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.245196 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.245226 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.245233 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.245245 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.245254 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:01Z","lastTransitionTime":"2025-10-06T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.348459 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.348513 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.348533 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.348557 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.348574 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:01Z","lastTransitionTime":"2025-10-06T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.450472 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.450502 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.450509 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.450523 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.450532 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:01Z","lastTransitionTime":"2025-10-06T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.552981 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.553016 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.553025 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.553040 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.553051 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:01Z","lastTransitionTime":"2025-10-06T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.655046 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.655086 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.655100 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.655113 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.655122 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:01Z","lastTransitionTime":"2025-10-06T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.757393 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.757433 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.757445 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.757463 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.757476 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:01Z","lastTransitionTime":"2025-10-06T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.859090 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.859122 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.859130 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.859144 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.859152 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:01Z","lastTransitionTime":"2025-10-06T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.961736 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.961774 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.961783 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.961815 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:01 crc kubenswrapper[4888]: I1006 15:02:01.961825 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:01Z","lastTransitionTime":"2025-10-06T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.063755 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.063857 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.063875 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.063898 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.063914 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:02Z","lastTransitionTime":"2025-10-06T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.166081 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.166145 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.166154 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.166170 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.166181 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:02Z","lastTransitionTime":"2025-10-06T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.269071 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.269138 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.269156 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.269184 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.269205 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:02Z","lastTransitionTime":"2025-10-06T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.372080 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.372129 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.372161 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.372184 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.372199 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:02Z","lastTransitionTime":"2025-10-06T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.474246 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.474593 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.474702 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.474838 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.474956 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:02Z","lastTransitionTime":"2025-10-06T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.576904 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.576958 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.576974 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.576996 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.577011 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:02Z","lastTransitionTime":"2025-10-06T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.679618 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.679658 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.679667 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.679680 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.679689 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:02Z","lastTransitionTime":"2025-10-06T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.781906 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.781954 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.781970 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.781995 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.782012 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:02Z","lastTransitionTime":"2025-10-06T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.884586 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.884896 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.884999 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.885071 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.885148 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:02Z","lastTransitionTime":"2025-10-06T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.921016 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:02 crc kubenswrapper[4888]: E1006 15:02:02.921198 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.921314 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.921338 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:02 crc kubenswrapper[4888]: E1006 15:02:02.921521 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:02 crc kubenswrapper[4888]: E1006 15:02:02.921591 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.921353 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:02 crc kubenswrapper[4888]: E1006 15:02:02.921694 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.987572 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.987637 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.987662 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.987692 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:02 crc kubenswrapper[4888]: I1006 15:02:02.987715 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:02Z","lastTransitionTime":"2025-10-06T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.090281 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.090310 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.090318 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.090330 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.090340 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:03Z","lastTransitionTime":"2025-10-06T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.192720 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.192765 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.192776 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.192813 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.192853 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:03Z","lastTransitionTime":"2025-10-06T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.295400 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.295441 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.295451 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.295465 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.295476 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:03Z","lastTransitionTime":"2025-10-06T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.397625 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.397671 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.397686 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.397703 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.397715 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:03Z","lastTransitionTime":"2025-10-06T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.499658 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.499727 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.499743 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.499756 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.499765 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:03Z","lastTransitionTime":"2025-10-06T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.601664 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.601705 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.601714 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.601731 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.601740 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:03Z","lastTransitionTime":"2025-10-06T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.704134 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.704168 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.704177 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.704191 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.704200 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:03Z","lastTransitionTime":"2025-10-06T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.806993 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.807044 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.807060 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.807083 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.807095 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:03Z","lastTransitionTime":"2025-10-06T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.909116 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.909175 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.909189 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.909206 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:03 crc kubenswrapper[4888]: I1006 15:02:03.909216 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:03Z","lastTransitionTime":"2025-10-06T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.012096 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.012153 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.012183 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.012206 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.012223 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:04Z","lastTransitionTime":"2025-10-06T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.115150 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.115224 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.115240 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.115260 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.115275 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:04Z","lastTransitionTime":"2025-10-06T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.218536 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.218598 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.218615 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.218642 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.218660 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:04Z","lastTransitionTime":"2025-10-06T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.321584 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.321694 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.321711 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.321727 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.321737 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:04Z","lastTransitionTime":"2025-10-06T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.424100 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.424376 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.424443 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.424517 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.424577 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:04Z","lastTransitionTime":"2025-10-06T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.527189 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.527967 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.528008 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.528030 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.528043 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:04Z","lastTransitionTime":"2025-10-06T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.630364 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.630411 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.630422 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.630443 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.630455 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:04Z","lastTransitionTime":"2025-10-06T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.732975 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.733892 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.733905 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.733917 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.733931 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:04Z","lastTransitionTime":"2025-10-06T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.836308 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.836336 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.836344 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.836357 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.836366 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:04Z","lastTransitionTime":"2025-10-06T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.920643 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:04 crc kubenswrapper[4888]: E1006 15:02:04.920886 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.920949 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.920992 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:04 crc kubenswrapper[4888]: E1006 15:02:04.921076 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.921126 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:04 crc kubenswrapper[4888]: E1006 15:02:04.921206 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:04 crc kubenswrapper[4888]: E1006 15:02:04.921643 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.938200 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.938257 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.938271 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.938317 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:04 crc kubenswrapper[4888]: I1006 15:02:04.938329 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:04Z","lastTransitionTime":"2025-10-06T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.041347 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.041392 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.041401 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.041417 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.041431 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:05Z","lastTransitionTime":"2025-10-06T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.143941 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.143984 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.143993 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.144008 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.144017 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:05Z","lastTransitionTime":"2025-10-06T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.246404 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.246464 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.246477 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.246496 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.246509 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:05Z","lastTransitionTime":"2025-10-06T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.348618 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.348647 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.348655 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.348669 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.348678 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:05Z","lastTransitionTime":"2025-10-06T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.451474 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.451516 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.451528 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.451545 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.451559 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:05Z","lastTransitionTime":"2025-10-06T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.554265 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.554588 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.554717 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.554879 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.555043 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:05Z","lastTransitionTime":"2025-10-06T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.658069 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.658110 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.658121 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.658137 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.658149 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:05Z","lastTransitionTime":"2025-10-06T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.760468 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.760500 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.760509 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.760522 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.760531 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:05Z","lastTransitionTime":"2025-10-06T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.863458 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.863508 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.863520 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.863540 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.863551 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:05Z","lastTransitionTime":"2025-10-06T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.965517 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.965578 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.965591 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.965606 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:05 crc kubenswrapper[4888]: I1006 15:02:05.965619 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:05Z","lastTransitionTime":"2025-10-06T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.068277 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.068333 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.068349 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.068370 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.068384 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:06Z","lastTransitionTime":"2025-10-06T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.171580 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.172279 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.172417 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.172544 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.172677 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:06Z","lastTransitionTime":"2025-10-06T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.274621 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.274652 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.274662 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.274678 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.274689 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:06Z","lastTransitionTime":"2025-10-06T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.376817 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.376864 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.376872 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.376887 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.376896 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:06Z","lastTransitionTime":"2025-10-06T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.479307 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.479350 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.479364 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.479381 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.479394 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:06Z","lastTransitionTime":"2025-10-06T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.581580 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.581615 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.581623 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.581640 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.581649 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:06Z","lastTransitionTime":"2025-10-06T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.683529 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.683576 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.683586 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.683612 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.683625 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:06Z","lastTransitionTime":"2025-10-06T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.786319 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.786362 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.786375 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.786397 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.786411 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:06Z","lastTransitionTime":"2025-10-06T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.888433 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.888490 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.888508 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.888531 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.888548 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:06Z","lastTransitionTime":"2025-10-06T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.921011 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.921087 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:06 crc kubenswrapper[4888]: E1006 15:02:06.921177 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.921224 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.921024 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:06 crc kubenswrapper[4888]: E1006 15:02:06.921361 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:06 crc kubenswrapper[4888]: E1006 15:02:06.921483 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:06 crc kubenswrapper[4888]: E1006 15:02:06.921554 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.936572 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:06 crc kubenswrapper[4888]: E1006 15:02:06.936769 4888 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:02:06 crc kubenswrapper[4888]: E1006 15:02:06.936867 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs podName:2aee40f4-3a30-43cb-aa49-aabcf3c074b7 nodeName:}" failed. No retries permitted until 2025-10-06 15:02:38.936839515 +0000 UTC m=+98.749190283 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs") pod "network-metrics-daemon-hm59m" (UID: "2aee40f4-3a30-43cb-aa49-aabcf3c074b7") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.990452 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.990493 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.990501 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.990513 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:06 crc kubenswrapper[4888]: I1006 15:02:06.990521 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:06Z","lastTransitionTime":"2025-10-06T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.093031 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.093261 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.093352 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.093473 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.093560 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:07Z","lastTransitionTime":"2025-10-06T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.195387 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.195924 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.196118 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.196275 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.196412 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:07Z","lastTransitionTime":"2025-10-06T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.298716 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.298746 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.298754 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.298768 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.298781 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:07Z","lastTransitionTime":"2025-10-06T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.401387 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.401437 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.401450 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.401470 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.401483 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:07Z","lastTransitionTime":"2025-10-06T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.503963 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.504282 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.504366 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.504460 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.504530 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:07Z","lastTransitionTime":"2025-10-06T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.606787 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.606848 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.606863 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.606883 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.606903 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:07Z","lastTransitionTime":"2025-10-06T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.709383 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.709430 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.709443 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.709459 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.709472 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:07Z","lastTransitionTime":"2025-10-06T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.811364 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.811412 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.811426 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.811445 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.811457 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:07Z","lastTransitionTime":"2025-10-06T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.914170 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.914217 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.914232 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.914251 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:07 crc kubenswrapper[4888]: I1006 15:02:07.914264 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:07Z","lastTransitionTime":"2025-10-06T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.017174 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.017223 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.017239 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.017260 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.017277 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.119209 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.119251 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.119266 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.119284 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.119299 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.221696 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.221724 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.221731 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.221748 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.221757 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.240746 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.240883 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.240953 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.241023 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.241084 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: E1006 15:02:08.255681 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:08Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.259184 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.259224 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.259234 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.259250 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.259261 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: E1006 15:02:08.271112 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:08Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.273843 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.273878 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.273889 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.273906 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.273917 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: E1006 15:02:08.285511 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:08Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.288500 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.288536 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.288546 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.288560 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.288570 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: E1006 15:02:08.297938 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:08Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.300391 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.300423 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.300435 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.300452 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.300462 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: E1006 15:02:08.310753 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:08Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:08 crc kubenswrapper[4888]: E1006 15:02:08.310916 4888 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.323965 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
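Note: all of the status-update attempts above fail identically, and the kubelet then gives up ("update node status exceeds retry count") until its next sync. The root cause is in the error text itself: the node-status PATCH is rejected because the "node.network-node-identity.openshift.io" validating webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-10-06. A quick way to confirm the certificate window from the node is to query the endpoint directly; this is a minimal sketch, assuming shell access on the host and that openssl is installed (typical for a CRC VM, but not shown in this log):

    # Print the notBefore/notAfter dates of the webhook's serving certificate
    echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -dates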
event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.323993 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.324003 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.324019 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.324031 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.426599 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.426634 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.426644 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.426660 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.426671 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.528620 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.528718 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.528742 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.528774 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.528852 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.631072 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.631126 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.631138 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.631159 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.631171 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.733755 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.733818 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.733830 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.733848 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.733861 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.835638 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.835690 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.835702 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.835722 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.835735 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.920348 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:08 crc kubenswrapper[4888]: E1006 15:02:08.920464 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.920539 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.920566 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:08 crc kubenswrapper[4888]: E1006 15:02:08.920611 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.920629 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:08 crc kubenswrapper[4888]: E1006 15:02:08.920677 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:08 crc kubenswrapper[4888]: E1006 15:02:08.920718 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.938496 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.938521 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.938529 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.938540 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:08 crc kubenswrapper[4888]: I1006 15:02:08.938548 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:08Z","lastTransitionTime":"2025-10-06T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.040436 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.040687 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.040769 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.040895 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.040983 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:09Z","lastTransitionTime":"2025-10-06T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.143706 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.143740 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.143751 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.143769 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.143782 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:09Z","lastTransitionTime":"2025-10-06T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.246244 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.246290 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.246303 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.246321 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.246340 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:09Z","lastTransitionTime":"2025-10-06T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.336710 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hw8s9_a8a92e6a-76c9-4370-b509-56d6e41f99de/kube-multus/0.log" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.336854 4888 generic.go:334] "Generic (PLEG): container finished" podID="a8a92e6a-76c9-4370-b509-56d6e41f99de" containerID="fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8" exitCode=1 Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.336866 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hw8s9" event={"ID":"a8a92e6a-76c9-4370-b509-56d6e41f99de","Type":"ContainerDied","Data":"fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8"} Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.337270 4888 scope.go:117] "RemoveContainer" containerID="fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.349355 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.349398 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.349410 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.349428 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.349439 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:09Z","lastTransitionTime":"2025-10-06T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.351680 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.363898 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.375814 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.386926 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.398514 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.415449 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.425762 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.436394 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.447088 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.457151 4888 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.457187 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.457200 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.457217 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.457228 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:09Z","lastTransitionTime":"2025-10-06T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.461427 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"2025-10-06T15:01:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8\\\\n2025-10-06T15:01:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8 to /host/opt/cni/bin/\\\\n2025-10-06T15:01:23Z [verbose] 
multus-daemon started\\\\n2025-10-06T15:01:23Z [verbose] Readiness Indicator file check\\\\n2025-10-06T15:02:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.481300 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:45Z\\\",\\\"message\\\":\\\"1:45.712441 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712217 6441 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1006 15:01:45.712450 6441 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:01:45.712456 6441 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:01:45.712462 6441 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712442 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1006 15:01:45.712474 6441 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1006 15:01:45.712512 6441 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF1006 15:01:45.712516 6441 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.494894 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.507503 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.520553 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.530866 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.542286 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 
15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.554269 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36c1eb30-6ced-46c5-91c5-95a931bfc2eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://723678c88214fc491c9f4ccbef6edb30aa659fa9420e676bfe71ea068c24b61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d931c3d197f4fdbc7658b3cc758073a4d2a864a44dde544c19247af1182415d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9b275373213ab960807fc270a22dd7e7358b03717db13359a12ed85a5b7698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:09Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.559024 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.559062 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.559072 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.559086 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.559094 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:09Z","lastTransitionTime":"2025-10-06T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.661469 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.661762 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.661889 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.661982 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.662076 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:09Z","lastTransitionTime":"2025-10-06T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.763814 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.764084 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.764171 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.764267 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.764356 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:09Z","lastTransitionTime":"2025-10-06T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.866609 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.866643 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.866654 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.866672 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.866683 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:09Z","lastTransitionTime":"2025-10-06T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.921369 4888 scope.go:117] "RemoveContainer" containerID="22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.969225 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.969256 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.969267 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.969283 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:09 crc kubenswrapper[4888]: I1006 15:02:09.969294 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:09Z","lastTransitionTime":"2025-10-06T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.071970 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.072001 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.072013 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.072029 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.072041 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:10Z","lastTransitionTime":"2025-10-06T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.174270 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.174303 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.174313 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.174329 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.174340 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:10Z","lastTransitionTime":"2025-10-06T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.276563 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.276590 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.276598 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.276611 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.276619 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:10Z","lastTransitionTime":"2025-10-06T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.342311 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/2.log" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.344783 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.345680 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.347585 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hw8s9_a8a92e6a-76c9-4370-b509-56d6e41f99de/kube-multus/0.log" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.347638 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hw8s9" event={"ID":"a8a92e6a-76c9-4370-b509-56d6e41f99de","Type":"ContainerStarted","Data":"4275f071ce10fcca2346d3403453ef0d290da1985e1671ef7066d9abc889c4c2"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.360014 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcon
t/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.376527 4888 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.379152 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.379218 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 
15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.379229 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.379246 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.379258 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:10Z","lastTransitionTime":"2025-10-06T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.388724 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.402679 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.422305 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0cc223bf7a953f7d68174b8f1e728f825a7e11
f93ce348ab34cba2af1ee4cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:45Z\\\",\\\"message\\\":\\\"1:45.712441 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712217 6441 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1006 15:01:45.712450 6441 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:01:45.712456 6441 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:01:45.712462 6441 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712442 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1006 15:01:45.712474 6441 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1006 15:01:45.712512 6441 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF1006 15:01:45.712516 6441 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.436305 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.451732 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.463754 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.473847 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.483314 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.483356 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.483366 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.483382 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.483393 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:10Z","lastTransitionTime":"2025-10-06T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.487968 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.501507 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"2025-10-06T15:01:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8\\\\n2025-10-06T15:01:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8 to /host/opt/cni/bin/\\\\n2025-10-06T15:01:23Z [verbose] multus-daemon started\\\\n2025-10-06T15:01:23Z [verbose] Readiness Indicator file check\\\\n2025-10-06T15:02:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.515182 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.528655 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.544656 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.558175 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.568565 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.579953 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36c1eb30-6ced-46c5-91c5-95a931bfc2eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://723678c88214fc491c9f4ccbef6edb30aa659fa9420e676bfe71ea068c24b61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d931c3d197f4fdbc7658b3cc758073a4d2a864a44dde544c19247af1182415d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9b275373213ab960807fc270a22dd7e7358b03717db13359a12ed85a5b7698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.584789 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.584917 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.584983 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.585061 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.585119 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:10Z","lastTransitionTime":"2025-10-06T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.592376 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.604588 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.613449 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.625820 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.637540 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.648609 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.662970 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.678512 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.688564 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.688608 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.688620 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.688635 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.688647 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:10Z","lastTransitionTime":"2025-10-06T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.693059 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.705170 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4275f071ce10fcca2346d3403453ef0d290da1985e1671ef7066d9abc889c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"2025-10-06T15:01:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8\\\\n2025-10-06T15:01:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8 to /host/opt/cni/bin/\\\\n2025-10-06T15:01:23Z [verbose] multus-daemon started\\\\n2025-10-06T15:01:23Z [verbose] Readiness Indicator file check\\\\n2025-10-06T15:02:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.723519 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:45Z\\\",\\\"message\\\":\\\"1:45.712441 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712217 6441 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1006 15:01:45.712450 6441 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:01:45.712456 6441 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:01:45.712462 6441 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712442 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1006 15:01:45.712474 6441 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1006 15:01:45.712512 6441 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF1006 15:01:45.712516 6441 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.737903 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.748357 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.762497 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.772107 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.785448 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.790870 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.790915 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.790928 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.790946 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.790958 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:10Z","lastTransitionTime":"2025-10-06T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.797206 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36c1eb30-6ced-46c5-91c5-95a931bfc2eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://723678c88214fc491c9f4ccbef6edb30aa659fa9420e676bfe71ea068c24b61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d931c3d197f4fdbc7658b3cc758073a4d2a864a44dde544c19247af1182415d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9b275373213ab960807fc270a22dd7e7358b03717db13359a12ed85a5b7698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.892938 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.893194 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.893481 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.893572 4888 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.893660 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:10Z","lastTransitionTime":"2025-10-06T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.920529 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.920575 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.920634 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.920529 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:10 crc kubenswrapper[4888]: E1006 15:02:10.920714 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:10 crc kubenswrapper[4888]: E1006 15:02:10.920839 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:10 crc kubenswrapper[4888]: E1006 15:02:10.920912 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:10 crc kubenswrapper[4888]: E1006 15:02:10.920970 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.935811 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.946613 4888 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.956978 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.971998 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.985982 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:10Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.999264 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.999759 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:10 crc kubenswrapper[4888]: I1006 15:02:10.999918 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.000030 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.000130 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:11Z","lastTransitionTime":"2025-10-06T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.002995 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.014658 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.024395 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.034769 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.045726 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4275f071ce10fcca2346d3403453ef0d290da1985e1671ef7066d9abc889c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"2025-10-06T15:01:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8\\\\n2025-10-06T15:01:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8 to /host/opt/cni/bin/\\\\n2025-10-06T15:01:23Z [verbose] multus-daemon started\\\\n2025-10-06T15:01:23Z [verbose] Readiness Indicator file check\\\\n2025-10-06T15:02:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.063251 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:45Z\\\",\\\"message\\\":\\\"1:45.712441 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712217 6441 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1006 15:01:45.712450 6441 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:01:45.712456 6441 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:01:45.712462 6441 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712442 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1006 15:01:45.712474 6441 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1006 15:01:45.712512 6441 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF1006 15:01:45.712516 6441 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.074174 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.084891 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.098340 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.102588 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.102770 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.102940 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.103086 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.103191 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:11Z","lastTransitionTime":"2025-10-06T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.108147 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.116598 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 
15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.126936 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36c1eb30-6ced-46c5-91c5-95a931bfc2eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://723678c88214fc491c9f4ccbef6edb30aa659fa9420e676bfe71ea068c24b61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d931c3d197f4fdbc7658b3cc758073a4d2a864a44dde544c19247af1182415d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9b275373213ab960807fc270a22dd7e7358b03717db13359a12ed85a5b7698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.205961 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.205993 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.206003 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.206016 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.206024 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:11Z","lastTransitionTime":"2025-10-06T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.308924 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.308962 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.308970 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.308986 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.308995 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:11Z","lastTransitionTime":"2025-10-06T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.352211 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/3.log" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.352737 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/2.log" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.355537 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc" exitCode=1 Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.355563 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc"} Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.355748 4888 scope.go:117] "RemoveContainer" containerID="22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.356248 4888 scope.go:117] "RemoveContainer" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc" Oct 06 15:02:11 crc kubenswrapper[4888]: E1006 15:02:11.356563 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.368654 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36c1eb30-6ced-46c5-91c5-95a931bfc2eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://723678c88214fc491c9f4ccbef6edb30aa659fa9420e676bfe71ea068c24b61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d931c3d197f4fdbc7658b3cc758073a4d2a864a44dde544c19247af1182415d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9b275373213ab960807fc270a22dd7e7358b03717db13359a12ed85a5b7698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.381641 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.394775 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.405049 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.411973 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.412015 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.412025 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.412043 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.412055 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:11Z","lastTransitionTime":"2025-10-06T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.415088 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.427419 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.440417 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.450628 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.460356 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.470572 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.482748 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4275f071ce10fcca2346d3403453ef0d290da1985e1671ef7066d9abc889c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"2025-10-06T15:01:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8\\\\n2025-10-06T15:01:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8 to /host/opt/cni/bin/\\\\n2025-10-06T15:01:23Z [verbose] multus-daemon started\\\\n2025-10-06T15:01:23Z [verbose] Readiness Indicator file check\\\\n2025-10-06T15:02:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.499460 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22c65419dab1e7586621a8db34fd8390615f611180c966cb84c1b78066249b70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:01:45Z\\\",\\\"message\\\":\\\"1:45.712441 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712217 6441 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1006 15:01:45.712450 6441 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:01:45.712456 6441 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:01:45.712462 6441 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-spjkk\\\\nI1006 15:01:45.712442 6441 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1006 15:01:45.712474 6441 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI1006 15:01:45.712512 6441 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF1006 15:01:45.712516 6441 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"message\\\":\\\"ator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:02:10.670659 6764 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:02:10.670664 6764 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1006 15:02:10.670670 6764 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1006 15:02:10.670677 6764 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1006 15:02:10.670683 6764 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1006 15:02:10.670687 6764 obj_retry.go:386] Retry successful 
for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1006 15:02:10.670692 6764 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1006 15:02:10.670582 6764 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.512157 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.513909 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.513935 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.513945 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.513962 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.513973 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:11Z","lastTransitionTime":"2025-10-06T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.531465 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.544969 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.554913 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.566728 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:11Z is after 2025-08-24T17:21:41Z" Oct 06 
15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.620104 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.620215 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.620230 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.620253 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.620265 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:11Z","lastTransitionTime":"2025-10-06T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.722444 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.722479 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.722489 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.722532 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.722542 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:11Z","lastTransitionTime":"2025-10-06T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.824706 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.824752 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.824764 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.824782 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.824816 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:11Z","lastTransitionTime":"2025-10-06T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.926894 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.927104 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.927163 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.927276 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:11 crc kubenswrapper[4888]: I1006 15:02:11.927366 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:11Z","lastTransitionTime":"2025-10-06T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.029227 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.029453 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.029692 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.029922 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.030116 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:12Z","lastTransitionTime":"2025-10-06T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.132281 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.132314 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.132321 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.132345 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.132355 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:12Z","lastTransitionTime":"2025-10-06T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.234459 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.234711 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.234820 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.234927 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.235014 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:12Z","lastTransitionTime":"2025-10-06T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.338540 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.339500 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.339575 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.339659 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.339732 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:12Z","lastTransitionTime":"2025-10-06T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.360479 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/3.log" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.364689 4888 scope.go:117] "RemoveContainer" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc" Oct 06 15:02:12 crc kubenswrapper[4888]: E1006 15:02:12.365001 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.377006 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.391041 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.403493 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.420417 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.441258 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z"
Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.441991 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.442021 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.442031 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.442046 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.442056 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:12Z","lastTransitionTime":"2025-10-06T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.455544 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36c1eb30-6ced-46c5-91c5-95a931bfc2eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://723678c88214fc491c9f4ccbef6edb30aa659fa9420e676bfe71ea068c24b61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d931c3d197f4fdbc7658b3cc758073a4d2a864a44dde544c19247af1182415d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9b275373213ab960807fc270a22dd7e7358b03717db13359a12ed85a5b7698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.468150 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a258686405474
65323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" 
enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 
2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.480716 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.494729 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.506336 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.517684 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.528765 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.542859 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4275f071ce10fcca2346d3403453ef0d290da1985e1671ef7066d9abc889c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"2025-10-06T15:01:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8\\\\n2025-10-06T15:01:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8 to /host/opt/cni/bin/\\\\n2025-10-06T15:01:23Z [verbose] multus-daemon started\\\\n2025-10-06T15:01:23Z [verbose] Readiness Indicator file check\\\\n2025-10-06T15:02:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z"
Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.544198 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.544256 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.544269 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.544286 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.544299 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:12Z","lastTransitionTime":"2025-10-06T15:02:12Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.565293 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"message\\\":\\\"ator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:02:10.670659 6764 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:02:10.670664 6764 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1006 15:02:10.670670 6764 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1006 15:02:10.670677 6764 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1006 15:02:10.670683 6764 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1006 15:02:10.670687 6764 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1006 15:02:10.670692 6764 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1006 15:02:10.670582 6764 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.581110 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.596430 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.615074 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:12Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.647316 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.647355 4888 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.647366 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.647381 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.647391 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:12Z","lastTransitionTime":"2025-10-06T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.749636 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.749680 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.749690 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.749707 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.749719 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:12Z","lastTransitionTime":"2025-10-06T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.852208 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.852263 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.852276 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.852293 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.852688 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:12Z","lastTransitionTime":"2025-10-06T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.921482 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:12 crc kubenswrapper[4888]: E1006 15:02:12.921615 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.921844 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.921860 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:12 crc kubenswrapper[4888]: E1006 15:02:12.921931 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.921938 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:12 crc kubenswrapper[4888]: E1006 15:02:12.922001 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:12 crc kubenswrapper[4888]: E1006 15:02:12.922048 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.955143 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.955190 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.955201 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.955217 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:12 crc kubenswrapper[4888]: I1006 15:02:12.955230 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:12Z","lastTransitionTime":"2025-10-06T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.057766 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.057853 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.057866 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.057880 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.057893 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:13Z","lastTransitionTime":"2025-10-06T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.159757 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.159822 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.159841 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.159863 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.159878 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:13Z","lastTransitionTime":"2025-10-06T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.262477 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.262514 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.262525 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.262543 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.262557 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:13Z","lastTransitionTime":"2025-10-06T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.364434 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.364492 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.364502 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.364516 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.364526 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:13Z","lastTransitionTime":"2025-10-06T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.467725 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.467942 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.467957 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.468007 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.468050 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:13Z","lastTransitionTime":"2025-10-06T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.570827 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.571257 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.571460 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.571654 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.571921 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:13Z","lastTransitionTime":"2025-10-06T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.675088 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.675350 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.675493 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.675591 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.675698 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:13Z","lastTransitionTime":"2025-10-06T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.778535 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.778580 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.778591 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.778608 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.778620 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:13Z","lastTransitionTime":"2025-10-06T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.881155 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.881440 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.881599 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.881812 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.881979 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:13Z","lastTransitionTime":"2025-10-06T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.985056 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.985105 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.985120 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.985138 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:13 crc kubenswrapper[4888]: I1006 15:02:13.985153 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:13Z","lastTransitionTime":"2025-10-06T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.088588 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.089119 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.089357 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.089541 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.089731 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:14Z","lastTransitionTime":"2025-10-06T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.194669 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.194720 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.194735 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.194753 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.194766 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:14Z","lastTransitionTime":"2025-10-06T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.298131 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.298185 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.298203 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.298238 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.298259 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:14Z","lastTransitionTime":"2025-10-06T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.400950 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.401001 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.401013 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.401029 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.401361 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:14Z","lastTransitionTime":"2025-10-06T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.503427 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.503463 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.503475 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.503490 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.503502 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:14Z","lastTransitionTime":"2025-10-06T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.605555 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.605592 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.605602 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.605617 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.605627 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:14Z","lastTransitionTime":"2025-10-06T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.708032 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.708078 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.708096 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.708117 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.708133 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:14Z","lastTransitionTime":"2025-10-06T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.810755 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.810785 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.810813 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.810828 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.810836 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:14Z","lastTransitionTime":"2025-10-06T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.913586 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.913626 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.913642 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.913663 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.913677 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:14Z","lastTransitionTime":"2025-10-06T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.922556 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:14 crc kubenswrapper[4888]: E1006 15:02:14.922665 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.922834 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:14 crc kubenswrapper[4888]: E1006 15:02:14.922903 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.923031 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:14 crc kubenswrapper[4888]: E1006 15:02:14.923081 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:14 crc kubenswrapper[4888]: I1006 15:02:14.923174 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:14 crc kubenswrapper[4888]: E1006 15:02:14.923221 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.015748 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.015776 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.015785 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.015815 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.015824 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:15Z","lastTransitionTime":"2025-10-06T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.118780 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.118854 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.118868 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.118921 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.118935 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:15Z","lastTransitionTime":"2025-10-06T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.221631 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.221661 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.221669 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.221682 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.221693 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:15Z","lastTransitionTime":"2025-10-06T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.324746 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.324790 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.324818 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.324838 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.324849 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:15Z","lastTransitionTime":"2025-10-06T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.427544 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.427591 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.427606 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.427625 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.427636 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:15Z","lastTransitionTime":"2025-10-06T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.530910 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.530980 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.531010 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.531043 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.531068 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:15Z","lastTransitionTime":"2025-10-06T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.633696 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.634015 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.634088 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.634159 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.634233 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:15Z","lastTransitionTime":"2025-10-06T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.736679 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.736714 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.736723 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.736758 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.736769 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:15Z","lastTransitionTime":"2025-10-06T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.839144 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.839380 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.839446 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.839519 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.839577 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:15Z","lastTransitionTime":"2025-10-06T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.942666 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.942709 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.942718 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.942731 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:15 crc kubenswrapper[4888]: I1006 15:02:15.942740 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:15Z","lastTransitionTime":"2025-10-06T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.046153 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.046241 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.046270 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.046305 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.046328 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:16Z","lastTransitionTime":"2025-10-06T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.149557 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.149633 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.149653 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.149681 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.149704 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:16Z","lastTransitionTime":"2025-10-06T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.253415 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.253657 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.253669 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.253687 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.253700 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:16Z","lastTransitionTime":"2025-10-06T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.357334 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.357401 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.357415 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.357434 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.357449 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:16Z","lastTransitionTime":"2025-10-06T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.460372 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.460431 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.460449 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.460473 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.460489 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:16Z","lastTransitionTime":"2025-10-06T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.563005 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.563090 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.563103 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.563119 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.563159 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:16Z","lastTransitionTime":"2025-10-06T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.666196 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.666252 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.666263 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.666298 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.666308 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:16Z","lastTransitionTime":"2025-10-06T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
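The setters.go:603 entries show the exact condition object the kubelet keeps writing while the network is down: type Ready, status False, reason KubeletNotReady, with the CNI complaint as the condition message. The same condition can be read back from the API side; a sketch assuming the third-party kubernetes Python client is installed and a kubeconfig can reach this cluster:

```python
from kubernetes import client, config  # assumes the 'kubernetes' package

config.load_kube_config()  # assumes a kubeconfig pointing at this cluster
ready = next(
    c for c in client.CoreV1Api().read_node("crc").status.conditions
    if c.type == "Ready"
)
# While the CNI config is missing, this should print: False KubeletNotReady
print(ready.status, ready.reason)
print(ready.message)
```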
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.770154 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.770219 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.770235 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.770252 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.770265 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:16Z","lastTransitionTime":"2025-10-06T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.872582 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.872656 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.872674 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.872700 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.872718 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:16Z","lastTransitionTime":"2025-10-06T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.921344 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.921380 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 15:02:16 crc kubenswrapper[4888]: E1006 15:02:16.921606 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.921650 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.921624 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 15:02:16 crc kubenswrapper[4888]: E1006 15:02:16.922053 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7"
Oct 06 15:02:16 crc kubenswrapper[4888]: E1006 15:02:16.922178 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 15:02:16 crc kubenswrapper[4888]: E1006 15:02:16.922245 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.975572 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.975633 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.975669 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.975703 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:16 crc kubenswrapper[4888]: I1006 15:02:16.975726 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:16Z","lastTransitionTime":"2025-10-06T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
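Both the repeating NotReady condition and the pod_workers.go sync errors above trace back to one fact: nothing has yet written a CNI network config into /etc/kubernetes/cni/net.d/, so the container runtime reports NetworkReady=false and no pod sandbox can get networking. On OpenShift that file is normally created by the cluster network operator's pods (multus/ovn-kubernetes here), not by hand. Purely as an illustration of what the kubelet is waiting for, a hypothetical minimal "conflist" of the kind CNI runtimes load from that directory, sketched as data in Python:

```python
import json

# Illustrative only: on this cluster the network operator writes the real file.
# A CNI conflist is a JSON document with a name, a cniVersion, and a plugin chain.
minimal_conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",  # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        }
    ],
}
# Such a file would live at e.g. /etc/kubernetes/cni/net.d/10-example.conflist;
# once a valid config appears there, the NetworkReady=false messages should clear.
print(json.dumps(minimal_conflist, indent=2))
```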
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.078953 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.078998 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.079013 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.079035 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.079051 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:17Z","lastTransitionTime":"2025-10-06T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.181617 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.181655 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.181666 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.181682 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.181691 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:17Z","lastTransitionTime":"2025-10-06T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.284235 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.284291 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.284301 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.284316 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.284326 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:17Z","lastTransitionTime":"2025-10-06T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.386121 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.386162 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.386172 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.386187 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.386197 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:17Z","lastTransitionTime":"2025-10-06T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.487896 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.487943 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.487952 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.487968 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.487977 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:17Z","lastTransitionTime":"2025-10-06T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.591091 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.591137 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.591147 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.591171 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.591183 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:17Z","lastTransitionTime":"2025-10-06T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.693337 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.693393 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.693409 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.693435 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.693454 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:17Z","lastTransitionTime":"2025-10-06T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.796238 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.796295 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.796313 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.796338 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.796355 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:17Z","lastTransitionTime":"2025-10-06T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.898750 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.898816 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.898829 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.898850 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:17 crc kubenswrapper[4888]: I1006 15:02:17.898860 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:17Z","lastTransitionTime":"2025-10-06T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.001863 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.001905 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.001916 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.001933 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.001944 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.104637 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.104679 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.104688 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.104700 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.104708 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.207494 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.207540 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.207552 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.207570 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.207582 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.310989 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.311070 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.311104 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.311185 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.311203 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.414375 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.414503 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.414525 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.414570 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.414601 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.517970 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.518019 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.518028 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.518042 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.518052 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.620365 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.620440 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.620478 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.620509 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.620533 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.645961 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.646003 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.646013 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.646030 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.646042 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: E1006 15:02:18.666167 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:18Z is after 2025-08-24T17:21:41Z"
Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.671602 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.671807 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
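The status patch itself is rejected before it reaches the API server: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/node presents a serving certificate that expired on 2025-08-24T17:21:41Z, about six weeks before the node's current time. A quick way to confirm the certificate's validity window from the node, sketched in Python (assumes the third-party cryptography package is installed and the endpoint is reachable; the *_utc attribute names follow that package's recent releases):

```python
import ssl
from cryptography import x509  # assumes the 'cryptography' package

# Fetch the webhook's serving certificate without verifying it (it is expired),
# then print its validity window.
pem = ssl.get_server_certificate(("127.0.0.1", 9743))
cert = x509.load_pem_x509_certificate(pem.encode())
print(cert.not_valid_before_utc, "->", cert.not_valid_after_utc)
# Per the error above, expect notAfter 2025-08-24T17:21:41Z, which is before
# the node's current time of 2025-10-06T15:02:18Z, hence the x509 failure.
```

The rejected patch also records the node's resource picture: capacity of 8 CPU and 24608868Ki memory against allocatable 7800m and 24148068Ki, that is, 200m CPU and 460800Ki (exactly 450 MiB) withheld for system and kubelet reservations plus eviction headroom.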
event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.671954 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.672093 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.672420 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: E1006 15:02:18.687378 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:18Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.691186 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.691239 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.691254 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.691276 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.691293 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: E1006 15:02:18.710626 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:18Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.713857 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.713923 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.713942 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.713973 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.713995 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: E1006 15:02:18.731769 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:18Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.736514 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.736757 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.736908 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.737020 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.737110 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: E1006 15:02:18.750306 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:18Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:18 crc kubenswrapper[4888]: E1006 15:02:18.750423 4888 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.752152 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.752199 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.752208 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.752223 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.752233 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.854395 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.854455 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.854471 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.854487 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.854501 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.920476 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.920531 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.920577 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:18 crc kubenswrapper[4888]: E1006 15:02:18.920705 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.920744 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:18 crc kubenswrapper[4888]: E1006 15:02:18.920874 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:18 crc kubenswrapper[4888]: E1006 15:02:18.921025 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:18 crc kubenswrapper[4888]: E1006 15:02:18.921311 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.933566 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.957441 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.957487 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.957501 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.957517 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:18 crc kubenswrapper[4888]: I1006 15:02:18.957529 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:18Z","lastTransitionTime":"2025-10-06T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.059391 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.059433 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.059443 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.059459 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.059472 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:19Z","lastTransitionTime":"2025-10-06T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.162544 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.162607 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.162624 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.162677 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.162740 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:19Z","lastTransitionTime":"2025-10-06T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.264618 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.264681 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.264701 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.264727 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.264747 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:19Z","lastTransitionTime":"2025-10-06T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.367450 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.367482 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.367491 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.367514 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.367523 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:19Z","lastTransitionTime":"2025-10-06T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.469928 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.469978 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.469990 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.470007 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.470018 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:19Z","lastTransitionTime":"2025-10-06T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.572219 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.572289 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.572303 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.572320 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.572331 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:19Z","lastTransitionTime":"2025-10-06T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.674793 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.674866 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.674878 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.674896 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.674907 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:19Z","lastTransitionTime":"2025-10-06T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.777100 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.777336 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.777502 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.777711 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.777938 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:19Z","lastTransitionTime":"2025-10-06T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.880564 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.880603 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.880617 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.880633 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.880643 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:19Z","lastTransitionTime":"2025-10-06T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.982900 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.983281 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.983435 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.983598 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:19 crc kubenswrapper[4888]: I1006 15:02:19.983742 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:19Z","lastTransitionTime":"2025-10-06T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.087087 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.087147 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.087173 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.087204 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.087229 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:20Z","lastTransitionTime":"2025-10-06T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.190623 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.190693 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.190714 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.190743 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.190766 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:20Z","lastTransitionTime":"2025-10-06T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.294829 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.295226 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.295254 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.295284 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.295308 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:20Z","lastTransitionTime":"2025-10-06T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.396921 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.396975 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.396993 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.397018 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.397037 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:20Z","lastTransitionTime":"2025-10-06T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.501993 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.502089 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.502107 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.502132 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.502156 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:20Z","lastTransitionTime":"2025-10-06T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.605092 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.605148 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.605166 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.605189 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.605208 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:20Z","lastTransitionTime":"2025-10-06T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.707324 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.707386 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.707404 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.707427 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.707446 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:20Z","lastTransitionTime":"2025-10-06T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.809459 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.809502 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.809513 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.809529 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.809541 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:20Z","lastTransitionTime":"2025-10-06T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.911344 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.911391 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.911403 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.911423 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.911437 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:20Z","lastTransitionTime":"2025-10-06T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.920856 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.920910 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.921051 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.921136 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:20 crc kubenswrapper[4888]: E1006 15:02:20.921134 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:20 crc kubenswrapper[4888]: E1006 15:02:20.921315 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:20 crc kubenswrapper[4888]: E1006 15:02:20.921443 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:20 crc kubenswrapper[4888]: E1006 15:02:20.921995 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.935347 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c
7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:20Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.946919 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:20Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.958029 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:20Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.967623 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:20Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.978769 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:20Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:20 crc kubenswrapper[4888]: I1006 15:02:20.991045 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:20Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.000672 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:20Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.009371 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:21Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.013282 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.013319 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.013329 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.013345 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.013354 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:21Z","lastTransitionTime":"2025-10-06T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.020132 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:21Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.032586 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4275f071ce10fcca2346d3403453ef0d290da1985e1671ef7066d9abc889c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"2025-10-06T15:01:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8\\\\n2025-10-06T15:01:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8 to /host/opt/cni/bin/\\\\n2025-10-06T15:01:23Z [verbose] multus-daemon started\\\\n2025-10-06T15:01:23Z [verbose] Readiness Indicator file check\\\\n2025-10-06T15:02:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:21Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.050985 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"message\\\":\\\"ator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:02:10.670659 6764 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:02:10.670664 6764 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1006 15:02:10.670670 6764 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1006 15:02:10.670677 6764 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1006 15:02:10.670683 6764 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1006 15:02:10.670687 6764 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1006 15:02:10.670692 6764 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1006 15:02:10.670582 6764 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:21Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.060562 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0f6368-26a5-414e-b60a-57bb98a18acf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fae7e669c7a8b9c64247ae096f7903bde47dec2a619368865ddd801e54bf4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7794b5a198fa819961d77fe83dba20bd4bf89b342fb6
030c2897109f8865f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c7794b5a198fa819961d77fe83dba20bd4bf89b342fb6030c2897109f8865f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:21Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.073007 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:21Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.085303 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:21Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.101879 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:21Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.111685 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:21Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.114997 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.115020 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.115029 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.115042 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.115051 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:21Z","lastTransitionTime":"2025-10-06T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.121034 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:21Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.131750 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36c1eb30-6ced-46c5-91c5-95a931bfc2eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://723678c88214fc491c9f4ccbef6edb30aa659fa9420e676bfe71ea068c24b61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d931c3d197f4fdbc7658b3cc758073a4d2a864a44dde544c19247af1182415d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9b275373213ab960807fc270a22dd7e7358b03717db13359a12ed85a5b7698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recove
ry-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:21Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.218340 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.218392 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.218404 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.218420 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.218432 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:21Z","lastTransitionTime":"2025-10-06T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.321074 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.321119 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.321134 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.321150 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.321160 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:21Z","lastTransitionTime":"2025-10-06T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.423436 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.423489 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.423512 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.423542 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.423565 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:21Z","lastTransitionTime":"2025-10-06T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.527079 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.527484 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.527684 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.527927 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.528082 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:21Z","lastTransitionTime":"2025-10-06T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.631185 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.631220 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.631231 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.631247 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.631258 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:21Z","lastTransitionTime":"2025-10-06T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.733026 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.733060 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.733068 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.733082 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.733091 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:21Z","lastTransitionTime":"2025-10-06T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.835424 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.835470 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.835484 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.835506 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.835522 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:21Z","lastTransitionTime":"2025-10-06T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.938262 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.938364 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.938395 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.938415 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:21 crc kubenswrapper[4888]: I1006 15:02:21.938430 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:21Z","lastTransitionTime":"2025-10-06T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.040709 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.040749 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.040760 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.040776 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.040787 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:22Z","lastTransitionTime":"2025-10-06T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.143262 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.143312 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.143322 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.143335 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.143345 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:22Z","lastTransitionTime":"2025-10-06T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.245966 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.246002 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.246011 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.246028 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.246040 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:22Z","lastTransitionTime":"2025-10-06T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.348048 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.348089 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.348098 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.348123 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.348132 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:22Z","lastTransitionTime":"2025-10-06T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.449999 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.450049 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.450061 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.450077 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.450090 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:22Z","lastTransitionTime":"2025-10-06T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.553076 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.553144 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.553165 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.553196 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.553217 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:22Z","lastTransitionTime":"2025-10-06T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.655750 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.655874 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.655894 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.655922 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.655940 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:22Z","lastTransitionTime":"2025-10-06T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.757955 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.758018 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.758037 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.758059 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.758076 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:22Z","lastTransitionTime":"2025-10-06T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.861282 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.861562 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.861643 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.861708 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.861771 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:22Z","lastTransitionTime":"2025-10-06T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.920419 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.920487 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:22 crc kubenswrapper[4888]: E1006 15:02:22.920875 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.920531 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.920502 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:22 crc kubenswrapper[4888]: E1006 15:02:22.920979 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:22 crc kubenswrapper[4888]: E1006 15:02:22.921096 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:22 crc kubenswrapper[4888]: E1006 15:02:22.920790 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.965137 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.965198 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.965210 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.965229 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:22 crc kubenswrapper[4888]: I1006 15:02:22.965241 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:22Z","lastTransitionTime":"2025-10-06T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.067432 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.067470 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.067478 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.067494 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.067503 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:23Z","lastTransitionTime":"2025-10-06T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.169919 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.169973 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.169983 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.170003 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.170016 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:23Z","lastTransitionTime":"2025-10-06T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.273377 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.273443 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.273455 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.273470 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.273480 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:23Z","lastTransitionTime":"2025-10-06T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.375828 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.375868 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.375881 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.375899 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.375911 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:23Z","lastTransitionTime":"2025-10-06T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.478671 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.478724 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.478743 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.478769 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.478790 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:23Z","lastTransitionTime":"2025-10-06T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.581408 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.581485 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.581509 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.581541 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.581564 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:23Z","lastTransitionTime":"2025-10-06T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.684786 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.684939 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.684975 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.685007 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.685028 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:23Z","lastTransitionTime":"2025-10-06T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.787695 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.787769 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.787792 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.787868 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.787894 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:23Z","lastTransitionTime":"2025-10-06T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.890891 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.890920 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.890929 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.890942 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.890952 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:23Z","lastTransitionTime":"2025-10-06T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.994335 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.994396 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.994411 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.994432 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:23 crc kubenswrapper[4888]: I1006 15:02:23.994446 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:23Z","lastTransitionTime":"2025-10-06T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.097368 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.097430 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.097443 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.097459 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.097492 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:24Z","lastTransitionTime":"2025-10-06T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.199767 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.200072 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.200225 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.200359 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.200452 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:24Z","lastTransitionTime":"2025-10-06T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.303421 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.303468 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.303480 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.303498 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.303511 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:24Z","lastTransitionTime":"2025-10-06T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.406106 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.406150 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.406161 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.406181 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.406192 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:24Z","lastTransitionTime":"2025-10-06T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.508427 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.508703 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.508789 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.508881 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.508949 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:24Z","lastTransitionTime":"2025-10-06T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.611183 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.611228 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.611240 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.611258 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.611270 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:24Z","lastTransitionTime":"2025-10-06T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.713335 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.713396 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.713415 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.713433 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.713445 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:24Z","lastTransitionTime":"2025-10-06T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.719724 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.719906 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.719926 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:28.7198994 +0000 UTC m=+148.532250118 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.719973 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.720025 4888 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.720061 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:03:28.720055124 +0000 UTC m=+148.532405842 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.720112 4888 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.720188 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 15:03:28.720164737 +0000 UTC m=+148.532515525 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.816168 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.816211 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.816219 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.816234 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.816246 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:24Z","lastTransitionTime":"2025-10-06T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.820675 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.820725 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.820881 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.820896 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.820906 4888 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.820945 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-10-06 15:03:28.820929491 +0000 UTC m=+148.633280209 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.820968 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.821007 4888 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.821031 4888 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.821097 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 15:03:28.821075155 +0000 UTC m=+148.633425923 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.919305 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.919357 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.919368 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.919385 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.919397 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:24Z","lastTransitionTime":"2025-10-06T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.920514 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.920513 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.920572 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.920674 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:24 crc kubenswrapper[4888]: I1006 15:02:24.920736 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.920825 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.920932 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:24 crc kubenswrapper[4888]: E1006 15:02:24.921000 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.021506 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.021539 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.021549 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.021565 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.021576 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:25Z","lastTransitionTime":"2025-10-06T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.124465 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.124534 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.124547 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.124564 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.124575 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:25Z","lastTransitionTime":"2025-10-06T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.227709 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.227749 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.227760 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.227776 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.227787 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:25Z","lastTransitionTime":"2025-10-06T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.329978 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.330069 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.330108 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.330142 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.330167 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:25Z","lastTransitionTime":"2025-10-06T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.432981 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.433360 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.433376 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.433394 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.433407 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:25Z","lastTransitionTime":"2025-10-06T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.535670 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.535729 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.535743 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.535760 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.535772 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:25Z","lastTransitionTime":"2025-10-06T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.639013 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.639077 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.639103 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.639133 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.639156 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:25Z","lastTransitionTime":"2025-10-06T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.742079 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.742146 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.742170 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.742195 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.742208 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:25Z","lastTransitionTime":"2025-10-06T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.845684 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.845754 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.845772 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.845835 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.845856 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:25Z","lastTransitionTime":"2025-10-06T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.949649 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.949760 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.949785 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.949857 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:25 crc kubenswrapper[4888]: I1006 15:02:25.949883 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:25Z","lastTransitionTime":"2025-10-06T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.053192 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.053244 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.053267 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.053291 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.053308 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:26Z","lastTransitionTime":"2025-10-06T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.155898 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.155936 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.155944 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.155960 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.155969 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:26Z","lastTransitionTime":"2025-10-06T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.258558 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.258602 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.258612 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.258628 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.258639 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:26Z","lastTransitionTime":"2025-10-06T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.361020 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.361086 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.361119 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.361153 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.361177 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:26Z","lastTransitionTime":"2025-10-06T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.464368 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.464435 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.464451 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.464790 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.464859 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:26Z","lastTransitionTime":"2025-10-06T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.568007 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.568057 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.568071 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.568088 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.568099 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:26Z","lastTransitionTime":"2025-10-06T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.670938 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.670996 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.671010 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.671028 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.671365 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:26Z","lastTransitionTime":"2025-10-06T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.774498 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.774568 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.774590 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.774621 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.774644 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:26Z","lastTransitionTime":"2025-10-06T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.878377 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.878468 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.878498 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.878532 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.878555 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:26Z","lastTransitionTime":"2025-10-06T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.920564 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.920585 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.920709 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.920915 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:26 crc kubenswrapper[4888]: E1006 15:02:26.920907 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 15:02:26 crc kubenswrapper[4888]: E1006 15:02:26.921055 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 15:02:26 crc kubenswrapper[4888]: E1006 15:02:26.921213 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7"
Oct 06 15:02:26 crc kubenswrapper[4888]: E1006 15:02:26.921324 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.981948 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.982009 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.982026 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.982062 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
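
The four pods above (network-check-source, network-check-target, networking-console-plugin, network-metrics-daemon) all need pod networking, so each sync is skipped with the same "network is not ready" error, differing only in pod and podUID. When triaging a burst like this it helps to collapse the stream to the distinct failing pods; here is a small, hypothetical helper (not OpenShift tooling) whose regex simply matches the record shape shown here:

```python
import re
import sys

# Hypothetical triage helper: reduce repeated kubelet
# "Error syncing pod, skipping" records to one (pod, podUID) pair each.
POD_SYNC_ERR = re.compile(
    r'"Error syncing pod, skipping" err=".*?" '
    r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"'
)

def failing_pods(lines):
    """Yield each distinct (pod, podUID) seen in pod_workers error records."""
    seen = set()
    for line in lines:
        for m in POD_SYNC_ERR.finditer(line):
            pair = (m["pod"], m["uid"])
            if pair not in seen:
                seen.add(pair)
                yield pair

if __name__ == "__main__":
    for pod, uid in failing_pods(sys.stdin):
        print(f"{pod}\t{uid}")
```

Fed this journal, it would print the four pod/podUID pairs above exactly once, however many retries follow.
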
Oct 06 15:02:26 crc kubenswrapper[4888]: I1006 15:02:26.982082 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:26Z","lastTransitionTime":"2025-10-06T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.085425 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.085502 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.085521 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.085549 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.085573 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:27Z","lastTransitionTime":"2025-10-06T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.188748 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.188908 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.189214 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.189524 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.189864 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:27Z","lastTransitionTime":"2025-10-06T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.292812 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.292872 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.292883 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.292899 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.292909 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:27Z","lastTransitionTime":"2025-10-06T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.397063 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.397111 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.397123 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.397140 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.397152 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:27Z","lastTransitionTime":"2025-10-06T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.500152 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.500193 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.500206 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.500223 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.500236 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:27Z","lastTransitionTime":"2025-10-06T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.603236 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.603278 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.603289 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.603305 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.603317 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:27Z","lastTransitionTime":"2025-10-06T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.706280 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.706343 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.706367 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.706398 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.706420 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:27Z","lastTransitionTime":"2025-10-06T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.809992 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.810074 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.810095 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.810120 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.810136 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:27Z","lastTransitionTime":"2025-10-06T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.911914 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.911954 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.911965 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.911983 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.911994 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:27Z","lastTransitionTime":"2025-10-06T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 06 15:02:27 crc kubenswrapper[4888]: I1006 15:02:27.922026 4888 scope.go:117] "RemoveContainer" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc"
Oct 06 15:02:27 crc kubenswrapper[4888]: E1006 15:02:27.922364 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1"
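
This record points at the likely source of the stall: ovnkube-controller, part of the OVN-Kubernetes node pod that should eventually publish the CNI configuration, is itself crash-looping, and the kubelet is holding its next restart in back-off ("back-off 40s"). The delay pattern is sketched below; the constants are the kubelet's documented CrashLoopBackOff defaults (10s initial delay, doubling per failed restart, capped at five minutes), assumed here rather than read from this cluster:

```python
# Sketch of the kubelet's CrashLoopBackOff schedule under assumed defaults.
BASE_DELAY_S = 10   # first back-off after a failed restart
MAX_DELAY_S = 300   # cap: five minutes

def crashloop_delay(failed_restarts: int) -> int:
    """Seconds the kubelet waits before the next restart attempt."""
    return min(BASE_DELAY_S * (2 ** failed_restarts), MAX_DELAY_S)

# 10s -> 20s -> 40s: the "back-off 40s" above matches a third failure.
assert [crashloop_delay(n) for n in range(4)] == [10, 20, 40, 80]
```

While that container stays down, no CNI configuration can appear in /etc/kubernetes/cni/net.d/, so the node-not-ready loop below continues unchanged.
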
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.014239 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.014303 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.014322 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.014346 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.014365 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.117705 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.117774 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.117828 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.117858 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.117883 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.220407 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.220447 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.220488 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.220501 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.220510 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.322688 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.322769 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.322794 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.322872 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.322898 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.427012 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.427065 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.427082 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.427105 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.427122 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.529480 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.529579 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.529666 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.529749 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.529857 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.633324 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.633374 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.633390 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.633414 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.633445 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.736927 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.737021 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.737037 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.737056 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.737106 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.840638 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.840681 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.840693 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.840709 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.840722 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.882754 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.882846 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.882873 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.882900 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.882922 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:28 crc kubenswrapper[4888]: E1006 15:02:28.901693 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:28Z is after 2025-08-24T17:21:41Z"
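
This is the failure that keeps the retry loop alive. The status patch itself is well-formed; the API server rejects it because its call to the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 fails TLS verification: the webhook's serving certificate expired on 2025-08-24, six weeks before the node's current clock time of 2025-10-06. That pattern is consistent with a cluster brought up long after its certificates were minted, before rotation has caught up. A quick way to confirm what the endpoint is presenting, as an illustrative sketch only (host and port come from the log; the unverified fetch and the pyca/cryptography dependency are assumptions of the sketch):

```python
import datetime
import socket
import ssl

from cryptography import x509  # assumes pyca/cryptography >= 42 is installed

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the error above

# Fetch the presented certificate WITHOUT verifying it; verification
# failing is precisely the symptom being inspected.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = x509.load_der_x509_certificate(tls.getpeercert(binary_form=True))

now = datetime.datetime.now(datetime.timezone.utc)
print(f"notAfter={cert.not_valid_after_utc}  now={now}  "
      f"expired={cert.not_valid_after_utc < now}")
```

Until that certificate is rotated (or the webhook starts serving a valid one), every node status patch fails the same way, which is why the kubelet keeps cycling between recording node events and re-attempting the identical patch below.
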
event="NodeHasNoDiskPressure" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.906315 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.906335 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.906350 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:28 crc kubenswrapper[4888]: E1006 15:02:28.920002 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.920443 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.920494 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.920494 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.920573 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:28 crc kubenswrapper[4888]: E1006 15:02:28.920678 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:28 crc kubenswrapper[4888]: E1006 15:02:28.920797 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:28 crc kubenswrapper[4888]: E1006 15:02:28.920912 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:28 crc kubenswrapper[4888]: E1006 15:02:28.920994 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.924915 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.924946 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.924956 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.924973 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.924987 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:28 crc kubenswrapper[4888]: E1006 15:02:28.944246 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.948684 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.948758 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.948770 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.948786 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.948815 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:28 crc kubenswrapper[4888]: E1006 15:02:28.962128 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.965458 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.965509 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.965526 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.965552 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.965567 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:28 crc kubenswrapper[4888]: E1006 15:02:28.981453 4888 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"be6bc275-7f5d-4ec6-b349-88bdcff88efc\\\",\\\"systemUUID\\\":\\\"f107361e-9ed9-4a24-a32e-a76cb5e92926\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:28Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:28 crc kubenswrapper[4888]: E1006 15:02:28.981599 4888 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.983104 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.983152 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.983164 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.983183 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:28 crc kubenswrapper[4888]: I1006 15:02:28.983195 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:28Z","lastTransitionTime":"2025-10-06T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.085674 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.085717 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.085728 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.085744 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.085757 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:29Z","lastTransitionTime":"2025-10-06T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.188601 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.188670 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.188684 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.188701 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.188711 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:29Z","lastTransitionTime":"2025-10-06T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.291800 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.291922 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.291944 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.291970 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.291990 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:29Z","lastTransitionTime":"2025-10-06T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.395106 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.395179 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.395204 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.395235 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.395255 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:29Z","lastTransitionTime":"2025-10-06T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.498203 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.498257 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.498272 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.498293 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.498310 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:29Z","lastTransitionTime":"2025-10-06T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.601229 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.601569 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.601690 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.601849 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.602014 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:29Z","lastTransitionTime":"2025-10-06T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.704887 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.705265 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.705380 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.705487 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.705582 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:29Z","lastTransitionTime":"2025-10-06T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.808811 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.808873 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.808882 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.808901 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.808913 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:29Z","lastTransitionTime":"2025-10-06T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.911696 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.911764 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.911785 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.911868 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:29 crc kubenswrapper[4888]: I1006 15:02:29.911912 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:29Z","lastTransitionTime":"2025-10-06T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.015578 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.015645 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.015670 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.015701 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.015719 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:30Z","lastTransitionTime":"2025-10-06T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.119402 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.119444 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.119470 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.119492 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.119507 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:30Z","lastTransitionTime":"2025-10-06T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.222649 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.222725 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.222750 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.222779 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.222844 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:30Z","lastTransitionTime":"2025-10-06T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.325785 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.326165 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.326340 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.326529 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.326696 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:30Z","lastTransitionTime":"2025-10-06T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.429255 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.429525 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.429590 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.429664 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.429746 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:30Z","lastTransitionTime":"2025-10-06T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.532699 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.532997 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.533132 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.533224 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.533311 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:30Z","lastTransitionTime":"2025-10-06T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.637095 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.637171 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.637195 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.637228 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.637252 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:30Z","lastTransitionTime":"2025-10-06T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.740023 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.740098 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.740121 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.740151 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.740175 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:30Z","lastTransitionTime":"2025-10-06T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.842551 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.842612 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.842627 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.842650 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.842666 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:30Z","lastTransitionTime":"2025-10-06T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.920708 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.920777 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.920725 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.920954 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:30 crc kubenswrapper[4888]: E1006 15:02:30.921085 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:30 crc kubenswrapper[4888]: E1006 15:02:30.921278 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:30 crc kubenswrapper[4888]: E1006 15:02:30.921349 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:30 crc kubenswrapper[4888]: E1006 15:02:30.921702 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.941957 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ade29ce0-3908-4dc8-af71-09bbb6b6bb8d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd498dd9216b311754fd0c370ddede762622c215146a608af0d3bd8451946555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9388cd3e7a6da0e0c984a12f3c71faeb8673c5a25868640547465323284734\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b997c0f07fba9573d263a518460c85b6fa73e74c3298e98ef4ae599a9921ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce4c1ce4e54a65b01d1cac36bd98b2330876ca57061d1978c1642930f945f131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e50a365dd1f4db6fcf5bef889cbe09e130724837f07c3e233771d20e083bf9a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"message\\\":\\\"nsecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1006 15:01:21.071314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1006 15:01:21.071317 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1006 15:01:21.073899 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1006 15:01:21.076049 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1006 15:01:21.079890 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1006 15:01:21.094309 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1006 15:01:21.079914 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094345 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1006 15:01:21.094436 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1006 15:01:21.094448 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1006 15:01:21.079929 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1006 15:01:21.094683 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1006 15:01:21.080262 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-493219260/tls.crt::/tmp/serving-cert-493219260/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1759762875\\\\\\\\\\\\\\\" (2025-10-06 15:01:14 +0000 UTC to 2025-11-05 15:01:15 +0000 UTC (now=2025-10-06 15:01:21.08023021 +0000 
UTC))\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19298f7ecddf9216b71203d4f305169dff291aad70e90c1a6f4de2c778d98376\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c233a877575641310a776d5186e9547c011ccc7d4811c21c9a42d40cf923bee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.944018 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.944591 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.944735 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.944926 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.945057 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:30 crc 
kubenswrapper[4888]: I1006 15:02:30.945227 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:30Z","lastTransitionTime":"2025-10-06T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.957088 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af494223ae9f967cb6c7c9e6a03e4ef3f564c4c7f7a957153f89fa13719e1db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce9204606649df02559438437fe3c018392880db2e703553a77badc35d67832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.971696 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:30 crc kubenswrapper[4888]: I1006 15:02:30.985681 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hm59m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-62b7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hm59m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:30Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.002245 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a145d9af-9431-4196-bd66-a095e39bf3ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1bfc20a40bdec489df5002b5bf321e6b4df470e60b5451b97a9a4614c9af809\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9bnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-spjkk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.015651 4888 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-hw8s9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8a92e6a-76c9-4370-b509-56d6e41f99de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4275f071ce10fcca2346d3403453ef0d290da1985e1671ef7066d9abc889c4c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:08Z\\\",\\\"message\\\":\\\"2025-10-06T15:01:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8\\\\n2025-10-06T15:01:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e88bf8b9-bc67-4068-8b21-f06a30142ce8 to /host/opt/cni/bin/\\\\n2025-10-06T15:01:23Z [verbose] multus-daemon started\\\\n2025-10-06T15:01:23Z [verbose] Readiness Indicator file check\\\\n2025-10-06T15:02:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtlv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hw8s9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.040100 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61cf5a40-f739-4ffe-8544-34bcd92aadc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T15:02:10Z\\\",\\\"message\\\":\\\"ator/machine-config-daemon-spjkk in node crc\\\\nI1006 15:02:10.670659 6764 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-spjkk after 0 failed attempt(s)\\\\nI1006 15:02:10.670664 6764 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1006 15:02:10.670670 6764 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1006 15:02:10.670677 6764 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1006 15:02:10.670683 6764 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1006 15:02:10.670687 6764 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1006 15:02:10.670692 6764 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1006 15:02:10.670582 6764 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T15:02:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phx28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hzx2q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.047777 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.047879 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.047900 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.047948 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.047965 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:31Z","lastTransitionTime":"2025-10-06T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.057492 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39a85b5599c758c2c54b10f16f4959c1f8774e5e0f72605a3d65170368810fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.070654 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.082537 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e9eca567c11bea05fd6760d0acd06f66f1ca2b1ffedebf8c0ddfdef148a824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.092684 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rwfbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d552ea8-3df5-49d4-9cf2-25e2147ff628\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11ad61f41658ecfecfa3f0b12bcaa36cc2252a77d7ed2385685f27d1dbf81c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ph2kk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rwfbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.103456 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bdba58e-334c-4ef0-8498-d233789c62b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://607b684cd63a829bce94658f8061ec19cf172b3de6e1f7b13e6a682867fe2511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be00c8eac364280cfde99b487b9afd0379f38a4dda083c9976e501d02f65e3b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7vb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-p4wzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 
15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.112960 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa0f6368-26a5-414e-b60a-57bb98a18acf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fae7e669c7a8b9c64247ae096f7903bde47dec2a619368865ddd801e54bf4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7794b5a198fa819961d77fe83dba20bd4bf89b342fb6030c2897109f8865f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c7794b5a198fa819961d77fe83dba20bd4bf89b342fb6030c2897109f8865f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.124022 4888 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e952768b-c228-4e82-8d8c-ccc363d03104\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef846c0da25df6fb8a7eddbe9b772d036dbe73f9edf3cf12f6924fe230201da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcdbe4c1649f559f5b55bb2797c637941d9b7652956e3af27a1d1a8e098c11d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9c07c41f80875b1719fca93f906f5eab29a9558f327c19229edae730349329\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623e242175696abff5469e56d82ec4a7c73a87ddc100c
33e7ac2996eb922196f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.134673 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.147900 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dk65d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22b737e9-61a2-4561-9dfe-6edb6ca1f976\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a8675712cbe169cbd7aa72aac4bbad486eb366000ec3592ef83581937abb82e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4bd7cdc5be81aad90a7a4a13c4548e6e289d2dbdde14fe397e7c1aca9773a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f77e5dd3ad9859d44b5166ce0622414a4c295d3cd600109ff22d837864642791\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193f7eb75ddf9ad4caeeec8075e298e1a17b6b0bc051eb7ad6b4fc308150d7c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://912a3863fda7a713011fd8fa9c814ba3a6eaafaeabef5958e4ab54e84df28525\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84ca194ca276784830cd9353fa7ce140f254071a61e74b6f1187a661a998bbc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d7a8c5450ecf9f1740b70b82ad516da478312b49904ebc9336a6fd1782c3fe3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfv5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dk65d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.149893 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.149923 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.149934 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.149951 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.149962 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:31Z","lastTransitionTime":"2025-10-06T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.157643 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-h2xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b81ef7f-121c-47c3-a360-af9e56447038\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c186306c5bae3e8bb9e003395f42b56695d1c18dda6a926d5bb8fea029a8d33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft78d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:23Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-h2xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.167010 4888 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36c1eb30-6ced-46c5-91c5-95a931bfc2eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://723678c88214fc491c9f4ccbef6edb30aa659fa9420e676bfe71ea068c24b61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d931c3d197f4fdbc7658b3cc758073a4d2a864a44dde544c19247af1182415d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d9b275373213ab960807fc270a22dd7e7358b03717db13359a12ed85a5b7698\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T15:01:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60b81618c790b3c4fd277f032f3b9e8347a33b9e171b27683ef4217301741b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T15:01:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T15:01:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T15:01:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T15:02:31Z is after 2025-08-24T17:21:41Z" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.251936 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.252031 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.252049 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.252073 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.252086 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:31Z","lastTransitionTime":"2025-10-06T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.354692 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.354767 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.354782 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.354818 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.354838 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:31Z","lastTransitionTime":"2025-10-06T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.457834 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.457883 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.457892 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.457907 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.457918 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:31Z","lastTransitionTime":"2025-10-06T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.560127 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.560190 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.560202 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.560219 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.560231 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:31Z","lastTransitionTime":"2025-10-06T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.663180 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.663249 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.663268 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.663293 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.663313 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:31Z","lastTransitionTime":"2025-10-06T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.766166 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.766234 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.766254 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.766276 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.766292 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:31Z","lastTransitionTime":"2025-10-06T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.868686 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.868742 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.868757 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.868778 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.868790 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:31Z","lastTransitionTime":"2025-10-06T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.971159 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.971213 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.971229 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.971250 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:31 crc kubenswrapper[4888]: I1006 15:02:31.971263 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:31Z","lastTransitionTime":"2025-10-06T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.074617 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.074665 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.074673 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.074690 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.074700 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:32Z","lastTransitionTime":"2025-10-06T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.177412 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.177460 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.177474 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.177488 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.177498 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:32Z","lastTransitionTime":"2025-10-06T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.279513 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.279774 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.279891 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.279999 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.280082 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:32Z","lastTransitionTime":"2025-10-06T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.383768 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.383890 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.383912 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.383945 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.383968 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:32Z","lastTransitionTime":"2025-10-06T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.486668 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.486697 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.486706 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.486719 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.486728 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:32Z","lastTransitionTime":"2025-10-06T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.589136 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.589170 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.589178 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.589194 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.589204 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:32Z","lastTransitionTime":"2025-10-06T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.692033 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.692114 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.692123 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.692137 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.692146 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:32Z","lastTransitionTime":"2025-10-06T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.794458 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.794506 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.794517 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.794535 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.794547 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:32Z","lastTransitionTime":"2025-10-06T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.897373 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.897413 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.897422 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.897435 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.897447 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:32Z","lastTransitionTime":"2025-10-06T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.921306 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.921480 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.921525 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:32 crc kubenswrapper[4888]: E1006 15:02:32.921666 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:32 crc kubenswrapper[4888]: E1006 15:02:32.921784 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:32 crc kubenswrapper[4888]: I1006 15:02:32.921848 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:32 crc kubenswrapper[4888]: E1006 15:02:32.921932 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:32 crc kubenswrapper[4888]: E1006 15:02:32.921995 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.000252 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.000289 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.000297 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.000312 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.000324 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:33Z","lastTransitionTime":"2025-10-06T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.103205 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.103289 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.103320 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.103354 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.103377 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:33Z","lastTransitionTime":"2025-10-06T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.206479 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.206553 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.206573 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.206598 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.206617 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:33Z","lastTransitionTime":"2025-10-06T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.309306 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.309359 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.309371 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.309396 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.309410 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:33Z","lastTransitionTime":"2025-10-06T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.412287 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.412325 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.412333 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.412345 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.412354 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:33Z","lastTransitionTime":"2025-10-06T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.515599 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.515644 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.515655 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.515672 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.515685 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:33Z","lastTransitionTime":"2025-10-06T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.623263 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.623325 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.623344 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.623366 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.623384 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:33Z","lastTransitionTime":"2025-10-06T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.725758 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.725832 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.725849 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.725870 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.725894 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:33Z","lastTransitionTime":"2025-10-06T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.828692 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.828754 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.828773 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.828835 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.828877 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:33Z","lastTransitionTime":"2025-10-06T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.932380 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.932509 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.932531 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.932564 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:33 crc kubenswrapper[4888]: I1006 15:02:33.932599 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:33Z","lastTransitionTime":"2025-10-06T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.035310 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.035378 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.035403 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.035432 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.035457 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:34Z","lastTransitionTime":"2025-10-06T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.138760 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.138838 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.138848 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.138867 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.138878 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:34Z","lastTransitionTime":"2025-10-06T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.241864 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.241904 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.241918 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.241937 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.241951 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:34Z","lastTransitionTime":"2025-10-06T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.344726 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.344774 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.344785 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.344834 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.344851 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:34Z","lastTransitionTime":"2025-10-06T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.447464 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.447498 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.447508 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.447521 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.447530 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:34Z","lastTransitionTime":"2025-10-06T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.549666 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.549794 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.549833 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.549851 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.549863 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:34Z","lastTransitionTime":"2025-10-06T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.652245 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.652296 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.652312 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.652333 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.652349 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:34Z","lastTransitionTime":"2025-10-06T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.754227 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.754288 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.754306 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.754335 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.754354 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:34Z","lastTransitionTime":"2025-10-06T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.856750 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.856787 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.856820 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.856840 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.856851 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:34Z","lastTransitionTime":"2025-10-06T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.920312 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:34 crc kubenswrapper[4888]: E1006 15:02:34.920744 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.920848 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.920896 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.920922 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:34 crc kubenswrapper[4888]: E1006 15:02:34.920903 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:34 crc kubenswrapper[4888]: E1006 15:02:34.921018 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:34 crc kubenswrapper[4888]: E1006 15:02:34.921057 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.959705 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.959763 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.959782 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.959832 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:34 crc kubenswrapper[4888]: I1006 15:02:34.959849 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:34Z","lastTransitionTime":"2025-10-06T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.063158 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.063238 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.063280 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.063310 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.063335 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:35Z","lastTransitionTime":"2025-10-06T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.166633 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.166702 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.166723 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.166752 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.166773 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:35Z","lastTransitionTime":"2025-10-06T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.270201 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.270255 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.270273 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.270301 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.270318 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:35Z","lastTransitionTime":"2025-10-06T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.373546 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.373580 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.373588 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.373618 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.373629 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:35Z","lastTransitionTime":"2025-10-06T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.476623 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.476684 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.476708 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.476730 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.476748 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:35Z","lastTransitionTime":"2025-10-06T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.579548 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.579596 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.579612 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.579631 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.579649 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:35Z","lastTransitionTime":"2025-10-06T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.682849 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.682908 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.682922 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.682941 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.682952 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:35Z","lastTransitionTime":"2025-10-06T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.785822 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.785863 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.785911 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.785936 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.785951 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:35Z","lastTransitionTime":"2025-10-06T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.888200 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.888265 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.888275 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.888294 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.888305 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:35Z","lastTransitionTime":"2025-10-06T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.990415 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.990470 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.990482 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.990503 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:35 crc kubenswrapper[4888]: I1006 15:02:35.990518 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:35Z","lastTransitionTime":"2025-10-06T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.093380 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.093415 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.093425 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.093438 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.093447 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:36Z","lastTransitionTime":"2025-10-06T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.195791 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.195849 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.195860 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.195876 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.195887 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:36Z","lastTransitionTime":"2025-10-06T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.298604 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.298654 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.298665 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.298684 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.298699 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:36Z","lastTransitionTime":"2025-10-06T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.402067 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.402144 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.402170 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.402201 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.402226 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:36Z","lastTransitionTime":"2025-10-06T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.505595 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.505653 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.505670 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.505694 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.505710 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:36Z","lastTransitionTime":"2025-10-06T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.608193 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.608228 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.608237 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.608251 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.608261 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:36Z","lastTransitionTime":"2025-10-06T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.710758 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.710820 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.710832 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.710853 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.710866 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:36Z","lastTransitionTime":"2025-10-06T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.813881 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.813922 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.813934 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.813953 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.813965 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:36Z","lastTransitionTime":"2025-10-06T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.916298 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.916349 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.916369 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.916394 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.916413 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:36Z","lastTransitionTime":"2025-10-06T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.921056 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.921090 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.921068 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:36 crc kubenswrapper[4888]: E1006 15:02:36.921170 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:36 crc kubenswrapper[4888]: I1006 15:02:36.921220 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:36 crc kubenswrapper[4888]: E1006 15:02:36.921298 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:36 crc kubenswrapper[4888]: E1006 15:02:36.921426 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
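Note the cadence: the same four pods fail sandbox creation at 15:02:32.92, 15:02:34.92, and again here at 15:02:36.92, roughly every two seconds. The toy loop below reproduces that rhythm under the simplifying assumption of a fixed two-second requeue; it is not the kubelet's actual pod-worker and backoff machinery, only an illustration of why the same block of errors keeps recurring while NetworkReady stays false.

    // resyncloop.go - illustrative fixed-interval retry, assumptions as above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // errNetworkNotReady stands in for the CNI error quoted throughout this log.
    var errNetworkNotReady = errors.New(
        "network is not ready: no CNI configuration file in /etc/kubernetes/cni/net.d/")

    // syncPod gives up before creating a sandbox while the network is not ready,
    // which is why each retry logs "No sandbox for pod can be found" again.
    func syncPod(pod string, networkReady bool) error {
        if !networkReady {
            return errNetworkNotReady
        }
        return nil
    }

    func main() {
        pods := []string{
            "openshift-network-console/networking-console-plugin-85b44fc459-gdk6g",
            "openshift-network-diagnostics/network-check-source-55646444c4-trplf",
            "openshift-network-diagnostics/network-check-target-xd92c",
            "openshift-multus/network-metrics-daemon-hm59m",
        }
        ticker := time.NewTicker(2 * time.Second)
        defer ticker.Stop()
        for round := 0; round < 2; round++ { // mirrors the 15:02:34 and 15:02:36 retries
            <-ticker.C
            for _, p := range pods {
                if err := syncPod(p, false); err != nil {
                    fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p)
                }
            }
        }
    }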
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:36 crc kubenswrapper[4888]: E1006 15:02:36.921517 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.018647 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.018696 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.018709 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.018730 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.018747 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:37Z","lastTransitionTime":"2025-10-06T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.123649 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.123723 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.123746 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.123777 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.123838 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:37Z","lastTransitionTime":"2025-10-06T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.227412 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.227478 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.227497 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.227522 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.227547 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:37Z","lastTransitionTime":"2025-10-06T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.369509 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.369560 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.369569 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.369584 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.369593 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:37Z","lastTransitionTime":"2025-10-06T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.471744 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.471784 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.471821 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.471839 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.471850 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:37Z","lastTransitionTime":"2025-10-06T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.574552 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.574602 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.574614 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.574630 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.574642 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:37Z","lastTransitionTime":"2025-10-06T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.676866 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.676932 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.676951 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.676977 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.676996 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:37Z","lastTransitionTime":"2025-10-06T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.780034 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.780129 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.780147 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.780172 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.780188 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:37Z","lastTransitionTime":"2025-10-06T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.883139 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.883181 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.883202 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.883222 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.883237 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:37Z","lastTransitionTime":"2025-10-06T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.985448 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.985507 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.985524 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.985548 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:37 crc kubenswrapper[4888]: I1006 15:02:37.985567 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:37Z","lastTransitionTime":"2025-10-06T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.087651 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.087752 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.087770 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.087840 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.087860 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:38Z","lastTransitionTime":"2025-10-06T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.189840 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.189911 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.189936 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.189968 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.189990 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:38Z","lastTransitionTime":"2025-10-06T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.292168 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.292612 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.292654 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.292677 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.292691 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:38Z","lastTransitionTime":"2025-10-06T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.394706 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.394753 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.394765 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.394784 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.394814 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:38Z","lastTransitionTime":"2025-10-06T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.496978 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.497054 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.497072 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.497097 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.497115 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:38Z","lastTransitionTime":"2025-10-06T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.605215 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.605279 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.605298 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.605320 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.605338 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:38Z","lastTransitionTime":"2025-10-06T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.708477 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.708535 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.708552 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.708572 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.708588 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:38Z","lastTransitionTime":"2025-10-06T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.811004 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.811057 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.811075 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.811092 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.811103 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:38Z","lastTransitionTime":"2025-10-06T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.913424 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.913471 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.913482 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.913503 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.913517 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:38Z","lastTransitionTime":"2025-10-06T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.921091 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:38 crc kubenswrapper[4888]: E1006 15:02:38.921295 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.921565 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:38 crc kubenswrapper[4888]: E1006 15:02:38.921712 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.922005 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.922014 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:38 crc kubenswrapper[4888]: E1006 15:02:38.922181 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:38 crc kubenswrapper[4888]: E1006 15:02:38.922297 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:38 crc kubenswrapper[4888]: I1006 15:02:38.987370 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:38 crc kubenswrapper[4888]: E1006 15:02:38.987543 4888 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 15:02:38 crc kubenswrapper[4888]: E1006 15:02:38.987644 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs podName:2aee40f4-3a30-43cb-aa49-aabcf3c074b7 nodeName:}" failed. No retries permitted until 2025-10-06 15:03:42.987621232 +0000 UTC m=+162.799971950 (durationBeforeRetry 1m4s). 
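The "durationBeforeRetry 1m4s" above is the volume manager's exponential backoff between MountVolume retries for the still-unresolved metrics-daemon-secret. As a minimal sketch (not kubelet source; the 500ms initial delay, doubling factor, and 2m2s cap are assumptions), the eighth consecutive failure lands exactly on 1m4s:

package main

import (
	"fmt"
	"time"
)

// Assumed backoff parameters; the real kubelet constants may differ.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second
)

func main() {
	d := initialDelay
	for failures := 1; failures <= 9; failures++ {
		fmt.Printf("failure %d -> durationBeforeRetry %v\n", failures, d)
		d *= 2 // double after every consecutive failure
		if d > maxDelay {
			d = maxDelay
		}
	}
}

Under those assumptions the schedule runs 500ms, 1s, 2s, ..., 32s, 1m4s, then pins at the cap, which is consistent with the retry above being pushed out a full minute to 15:03:42.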
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.016360 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.016415 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.016426 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.016444 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.016460 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:39Z","lastTransitionTime":"2025-10-06T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.066142 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.066222 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.066236 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.066254 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.066288 4888 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T15:02:39Z","lastTransitionTime":"2025-10-06T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.110391 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc"]
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.110953 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc"
Oct 06 15:02:39 crc kubenswrapper[4888]: W1006 15:02:39.112843 4888 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: secrets "cluster-version-operator-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-version": no relationship found between node 'crc' and this object
Oct 06 15:02:39 crc kubenswrapper[4888]: E1006 15:02:39.112897 4888 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-version-operator-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-version\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Oct 06 15:02:39 crc kubenswrapper[4888]: W1006 15:02:39.113335 4888 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-version": no relationship found between node 'crc' and this object
Oct 06 15:02:39 crc kubenswrapper[4888]: E1006 15:02:39.113387 4888 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-version\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Oct 06 15:02:39 crc kubenswrapper[4888]: W1006 15:02:39.115061 4888 reflector.go:561] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": failed to list *v1.Secret: secrets "default-dockercfg-gxtc4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-version": no relationship found between node 'crc' and this object
Oct 06 15:02:39 crc kubenswrapper[4888]: E1006 15:02:39.115087 4888 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"default-dockercfg-gxtc4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-gxtc4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-version\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.116369 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.153812 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.153769779 podStartE2EDuration="1m17.153769779s" podCreationTimestamp="2025-10-06 15:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:39.131450479 +0000 UTC m=+98.943801227" watchObservedRunningTime="2025-10-06 15:02:39.153769779 +0000 UTC m=+98.966120487"
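The "no relationship found between node 'crc' and this object" denials above come from the node authorizer: a kubelet may read a secret or configmap only once some pod bound to its node references that object, and the cluster-version-operator pod had only just been ADDed. The toy model below is an illustration of that rule (an assumption for this log, not the real plugin's code); once the pod-to-node edge is registered the read succeeds, which is why "Caches populated" lines for the same objects follow at 15:02:40:

package main

import "fmt"

// edge records that a pod bound to node references a secret in a namespace.
type edge struct{ node, namespace, secret string }

// canGetSecret is a simplified stand-in for the node authorizer's graph check.
func canGetSecret(graph []edge, node, namespace, secret string) bool {
	for _, e := range graph {
		if e.node == node && e.namespace == namespace && e.secret == secret {
			return true
		}
	}
	return false
}

func main() {
	var graph []edge // pod not yet registered against the node: denied
	fmt.Println(canGetSecret(graph, "crc", "openshift-cluster-version", "cluster-version-operator-serving-cert")) // false
	// After the CVO pod binding is known, the edge exists and the watch can start.
	graph = append(graph, edge{"crc", "openshift-cluster-version", "cluster-version-operator-serving-cert"})
	fmt.Println(canGetSecret(graph, "crc", "openshift-cluster-version", "cluster-version-operator-serving-cert")) // true
}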
m=+98.966120487" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.224674 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.224653683 podStartE2EDuration="9.224653683s" podCreationTimestamp="2025-10-06 15:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:39.223697555 +0000 UTC m=+99.036048283" watchObservedRunningTime="2025-10-06 15:02:39.224653683 +0000 UTC m=+99.037004401" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.290868 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfe7e284-82dd-4f89-b968-b6bfa5fab229-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.290921 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bfe7e284-82dd-4f89-b968-b6bfa5fab229-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.290943 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bfe7e284-82dd-4f89-b968-b6bfa5fab229-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.290960 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfe7e284-82dd-4f89-b968-b6bfa5fab229-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.291000 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bfe7e284-82dd-4f89-b968-b6bfa5fab229-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.293176 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rwfbx" podStartSLOduration=79.293160826 podStartE2EDuration="1m19.293160826s" podCreationTimestamp="2025-10-06 15:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:39.281391784 +0000 UTC m=+99.093742512" watchObservedRunningTime="2025-10-06 15:02:39.293160826 +0000 UTC m=+99.105511544" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.293316 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podStartSLOduration=79.293311971 podStartE2EDuration="1m19.293311971s" podCreationTimestamp="2025-10-06 15:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:39.292629701 +0000 UTC m=+99.104980419" watchObservedRunningTime="2025-10-06 15:02:39.293311971 +0000 UTC m=+99.105662689" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.330936 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hw8s9" podStartSLOduration=78.330916246 podStartE2EDuration="1m18.330916246s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:39.307260217 +0000 UTC m=+99.119610935" watchObservedRunningTime="2025-10-06 15:02:39.330916246 +0000 UTC m=+99.143266964" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.363442 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.363421172 podStartE2EDuration="21.363421172s" podCreationTimestamp="2025-10-06 15:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:39.348176189 +0000 UTC m=+99.160526907" watchObservedRunningTime="2025-10-06 15:02:39.363421172 +0000 UTC m=+99.175771890" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.363749 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.363743462 podStartE2EDuration="1m12.363743462s" podCreationTimestamp="2025-10-06 15:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:39.362513286 +0000 UTC m=+99.174863994" watchObservedRunningTime="2025-10-06 15:02:39.363743462 +0000 UTC m=+99.176094180" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.392262 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfe7e284-82dd-4f89-b968-b6bfa5fab229-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.392308 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bfe7e284-82dd-4f89-b968-b6bfa5fab229-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.392331 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bfe7e284-82dd-4f89-b968-b6bfa5fab229-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 
15:02:39.392347 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfe7e284-82dd-4f89-b968-b6bfa5fab229-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.392378 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bfe7e284-82dd-4f89-b968-b6bfa5fab229-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.392447 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bfe7e284-82dd-4f89-b968-b6bfa5fab229-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.392459 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bfe7e284-82dd-4f89-b968-b6bfa5fab229-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.393193 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bfe7e284-82dd-4f89-b968-b6bfa5fab229-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.402079 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dk65d" podStartSLOduration=78.402063757 podStartE2EDuration="1m18.402063757s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:39.400922674 +0000 UTC m=+99.213273392" watchObservedRunningTime="2025-10-06 15:02:39.402063757 +0000 UTC m=+99.214414475" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.413311 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-h2xmp" podStartSLOduration=79.413288984 podStartE2EDuration="1m19.413288984s" podCreationTimestamp="2025-10-06 15:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:39.413063377 +0000 UTC m=+99.225414105" watchObservedRunningTime="2025-10-06 15:02:39.413288984 +0000 UTC m=+99.225639702" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.431383 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-p4wzl" podStartSLOduration=78.431362551 podStartE2EDuration="1m18.431362551s" podCreationTimestamp="2025-10-06 15:01:21 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:39.430979999 +0000 UTC m=+99.243330727" watchObservedRunningTime="2025-10-06 15:02:39.431362551 +0000 UTC m=+99.243713279" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.446593 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.446574884 podStartE2EDuration="48.446574884s" podCreationTimestamp="2025-10-06 15:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:39.445050909 +0000 UTC m=+99.257401627" watchObservedRunningTime="2025-10-06 15:02:39.446574884 +0000 UTC m=+99.258925592" Oct 06 15:02:39 crc kubenswrapper[4888]: I1006 15:02:39.921001 4888 scope.go:117] "RemoveContainer" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc" Oct 06 15:02:39 crc kubenswrapper[4888]: E1006 15:02:39.921366 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hzx2q_openshift-ovn-kubernetes(61cf5a40-f739-4ffe-8544-34bcd92aadc1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" Oct 06 15:02:40 crc kubenswrapper[4888]: I1006 15:02:40.012015 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 06 15:02:40 crc kubenswrapper[4888]: I1006 15:02:40.022617 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfe7e284-82dd-4f89-b968-b6bfa5fab229-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:40 crc kubenswrapper[4888]: I1006 15:02:40.122464 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 06 15:02:40 crc kubenswrapper[4888]: I1006 15:02:40.127323 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfe7e284-82dd-4f89-b968-b6bfa5fab229-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zgwqc\" (UID: \"bfe7e284-82dd-4f89-b968-b6bfa5fab229\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:40 crc kubenswrapper[4888]: I1006 15:02:40.644917 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 06 15:02:40 crc kubenswrapper[4888]: I1006 15:02:40.647932 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" Oct 06 15:02:40 crc kubenswrapper[4888]: I1006 15:02:40.920437 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:40 crc kubenswrapper[4888]: I1006 15:02:40.920450 4888 util.go:30] "No sandbox for pod can be found. 
Oct 06 15:02:40 crc kubenswrapper[4888]: I1006 15:02:40.920504 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 15:02:40 crc kubenswrapper[4888]: I1006 15:02:40.920541 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 15:02:40 crc kubenswrapper[4888]: E1006 15:02:40.921834 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 15:02:40 crc kubenswrapper[4888]: E1006 15:02:40.922047 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 15:02:40 crc kubenswrapper[4888]: E1006 15:02:40.922206 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7"
Oct 06 15:02:40 crc kubenswrapper[4888]: E1006 15:02:40.922241 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:41 crc kubenswrapper[4888]: I1006 15:02:41.459433 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" event={"ID":"bfe7e284-82dd-4f89-b968-b6bfa5fab229","Type":"ContainerStarted","Data":"6ddde93d0d54d5dca01687a1585103005427caa327db8f3ab885d64f8560d8e1"} Oct 06 15:02:41 crc kubenswrapper[4888]: I1006 15:02:41.459490 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" event={"ID":"bfe7e284-82dd-4f89-b968-b6bfa5fab229","Type":"ContainerStarted","Data":"41141c852fba544bf17cf3922ca8a34688d3699b5d093217f53b01992ec1ddba"} Oct 06 15:02:41 crc kubenswrapper[4888]: I1006 15:02:41.473865 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zgwqc" podStartSLOduration=81.473790882 podStartE2EDuration="1m21.473790882s" podCreationTimestamp="2025-10-06 15:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:41.472690119 +0000 UTC m=+101.285040837" watchObservedRunningTime="2025-10-06 15:02:41.473790882 +0000 UTC m=+101.286141670" Oct 06 15:02:42 crc kubenswrapper[4888]: I1006 15:02:42.920675 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:42 crc kubenswrapper[4888]: I1006 15:02:42.920978 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:42 crc kubenswrapper[4888]: I1006 15:02:42.921032 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:42 crc kubenswrapper[4888]: I1006 15:02:42.921231 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:42 crc kubenswrapper[4888]: E1006 15:02:42.921331 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:42 crc kubenswrapper[4888]: E1006 15:02:42.921464 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:42 crc kubenswrapper[4888]: E1006 15:02:42.921588 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:42 crc kubenswrapper[4888]: E1006 15:02:42.921647 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:44 crc kubenswrapper[4888]: I1006 15:02:44.920356 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:44 crc kubenswrapper[4888]: I1006 15:02:44.920386 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:44 crc kubenswrapper[4888]: I1006 15:02:44.920400 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:44 crc kubenswrapper[4888]: I1006 15:02:44.920362 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:44 crc kubenswrapper[4888]: E1006 15:02:44.920541 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:44 crc kubenswrapper[4888]: E1006 15:02:44.920624 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:44 crc kubenswrapper[4888]: E1006 15:02:44.920721 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:44 crc kubenswrapper[4888]: E1006 15:02:44.920867 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:46 crc kubenswrapper[4888]: I1006 15:02:46.920576 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:46 crc kubenswrapper[4888]: E1006 15:02:46.921477 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:46 crc kubenswrapper[4888]: I1006 15:02:46.920781 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:46 crc kubenswrapper[4888]: I1006 15:02:46.920817 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:46 crc kubenswrapper[4888]: I1006 15:02:46.920743 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:46 crc kubenswrapper[4888]: E1006 15:02:46.921773 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:46 crc kubenswrapper[4888]: E1006 15:02:46.922094 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:46 crc kubenswrapper[4888]: E1006 15:02:46.921903 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:48 crc kubenswrapper[4888]: I1006 15:02:48.920928 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:48 crc kubenswrapper[4888]: I1006 15:02:48.920991 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:48 crc kubenswrapper[4888]: I1006 15:02:48.920930 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:48 crc kubenswrapper[4888]: E1006 15:02:48.921086 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:48 crc kubenswrapper[4888]: I1006 15:02:48.921150 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:48 crc kubenswrapper[4888]: E1006 15:02:48.921211 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:48 crc kubenswrapper[4888]: E1006 15:02:48.921376 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:48 crc kubenswrapper[4888]: E1006 15:02:48.921493 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:50 crc kubenswrapper[4888]: I1006 15:02:50.920654 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:50 crc kubenswrapper[4888]: I1006 15:02:50.920666 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:50 crc kubenswrapper[4888]: I1006 15:02:50.920755 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:50 crc kubenswrapper[4888]: I1006 15:02:50.921412 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:50 crc kubenswrapper[4888]: E1006 15:02:50.921542 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:50 crc kubenswrapper[4888]: E1006 15:02:50.921582 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:50 crc kubenswrapper[4888]: E1006 15:02:50.921639 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:50 crc kubenswrapper[4888]: E1006 15:02:50.921974 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:50 crc kubenswrapper[4888]: I1006 15:02:50.922222 4888 scope.go:117] "RemoveContainer" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc" Oct 06 15:02:51 crc kubenswrapper[4888]: I1006 15:02:51.496551 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/3.log" Oct 06 15:02:51 crc kubenswrapper[4888]: I1006 15:02:51.500453 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerStarted","Data":"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe"} Oct 06 15:02:51 crc kubenswrapper[4888]: I1006 15:02:51.501009 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:02:51 crc kubenswrapper[4888]: I1006 15:02:51.532922 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podStartSLOduration=90.532904688 podStartE2EDuration="1m30.532904688s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:51.532252409 +0000 UTC m=+111.344603147" watchObservedRunningTime="2025-10-06 15:02:51.532904688 +0000 UTC m=+111.345255406" Oct 06 15:02:51 crc kubenswrapper[4888]: I1006 15:02:51.714354 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hm59m"] Oct 06 15:02:51 crc kubenswrapper[4888]: I1006 15:02:51.714455 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:51 crc kubenswrapper[4888]: E1006 15:02:51.714540 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:52 crc kubenswrapper[4888]: I1006 15:02:52.920954 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:52 crc kubenswrapper[4888]: E1006 15:02:52.921146 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 15:02:52 crc kubenswrapper[4888]: I1006 15:02:52.920972 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:52 crc kubenswrapper[4888]: E1006 15:02:52.921302 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hm59m" podUID="2aee40f4-3a30-43cb-aa49-aabcf3c074b7" Oct 06 15:02:52 crc kubenswrapper[4888]: I1006 15:02:52.920977 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:52 crc kubenswrapper[4888]: E1006 15:02:52.921383 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 15:02:52 crc kubenswrapper[4888]: I1006 15:02:52.920964 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:52 crc kubenswrapper[4888]: E1006 15:02:52.921460 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.254657 4888 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.254854 4888 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.292260 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4r4q2"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.292710 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.296523 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.296930 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.297100 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.297522 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.298172 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.298587 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.300765 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.300836 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.300935 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.301007 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.301127 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.301366 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.301542 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.301589 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.301671 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.301774 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.302427 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.302524 4888 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.302659 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.303131 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.303850 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.304130 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.304388 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.307971 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.308280 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.308374 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.308391 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.308451 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.312586 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.317900 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dqtkw"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.318847 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.333862 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.334854 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mkw2s"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.335336 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.335692 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.335888 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.335973 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.336037 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.336302 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7g2cw"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.336641 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.346868 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6wlzb"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.347307 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.347692 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.348266 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6wlzb" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.348277 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j2xlv"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.348971 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.349419 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.351412 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-n956g"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.351969 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.357657 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-vrp9q"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.358256 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.358643 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.359012 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pv57g"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.359260 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.359449 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.361737 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.362260 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.362781 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.362974 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363016 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363042 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-etcd-serving-ca\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363063 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363083 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/661fb82e-5117-41bb-a175-bf72f6c288bd-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j8c9b\" (UID: \"661fb82e-5117-41bb-a175-bf72f6c288bd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363105 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-client-ca\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363123 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a856af6d-ccf3-46be-9ad5-81206cec4cee-machine-approver-tls\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363142 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-policies\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363162 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gprn\" (UniqueName: \"kubernetes.io/projected/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-kube-api-access-7gprn\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363182 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-encryption-config\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363217 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363239 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-node-pullsecrets\") pod \"apiserver-76f77b778f-dqtkw\" (UID: 
\"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363259 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-encryption-config\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363282 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57kdk\" (UniqueName: \"kubernetes.io/projected/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-kube-api-access-57kdk\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363313 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-serving-cert\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363340 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363358 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a856af6d-ccf3-46be-9ad5-81206cec4cee-auth-proxy-config\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363378 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-images\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363400 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363419 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363437 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/050b3bfa-d33d-4729-b4b5-088f03ab45ab-serving-cert\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363455 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4879a6-596d-45ab-acfa-d1d50894efd9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jzsld\" (UID: \"4c4879a6-596d-45ab-acfa-d1d50894efd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363487 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnzc6\" (UniqueName: \"kubernetes.io/projected/661fb82e-5117-41bb-a175-bf72f6c288bd-kube-api-access-wnzc6\") pod \"openshift-config-operator-7777fb866f-j8c9b\" (UID: \"661fb82e-5117-41bb-a175-bf72f6c288bd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363508 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-config\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363528 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-etcd-client\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363548 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363562 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/050b3bfa-d33d-4729-b4b5-088f03ab45ab-config\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363575 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-dir\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363589 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363607 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j59m7\" (UniqueName: \"kubernetes.io/projected/4c4879a6-596d-45ab-acfa-d1d50894efd9-kube-api-access-j59m7\") pod \"openshift-apiserver-operator-796bbdcf4f-jzsld\" (UID: \"4c4879a6-596d-45ab-acfa-d1d50894efd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363621 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-etcd-client\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363636 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661fb82e-5117-41bb-a175-bf72f6c288bd-serving-cert\") pod \"openshift-config-operator-7777fb866f-j8c9b\" (UID: \"661fb82e-5117-41bb-a175-bf72f6c288bd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363650 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-audit\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363669 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363682 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363698 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-image-import-ca\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc 
kubenswrapper[4888]: I1006 15:02:54.363713 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/050b3bfa-d33d-4729-b4b5-088f03ab45ab-service-ca-bundle\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363728 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363744 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363764 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-audit-dir\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363777 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqgzt\" (UniqueName: \"kubernetes.io/projected/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-kube-api-access-hqgzt\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363791 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xltsn\" (UniqueName: \"kubernetes.io/projected/050b3bfa-d33d-4729-b4b5-088f03ab45ab-kube-api-access-xltsn\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363829 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363849 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a856af6d-ccf3-46be-9ad5-81206cec4cee-config\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363870 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/050b3bfa-d33d-4729-b4b5-088f03ab45ab-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363891 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363911 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-serving-cert\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363933 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-audit-policies\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363953 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.363985 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.364070 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs6jx\" (UniqueName: \"kubernetes.io/projected/6093f83d-6829-4712-91d0-eeed9f69d78d-kube-api-access-rs6jx\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.364098 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-config\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.364119 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stqb4\" (UniqueName: \"kubernetes.io/projected/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-kube-api-access-stqb4\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.364156 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-config\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.364179 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-audit-dir\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.364222 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.364225 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq662\" (UniqueName: \"kubernetes.io/projected/a856af6d-ccf3-46be-9ad5-81206cec4cee-kube-api-access-hq662\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.364356 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4879a6-596d-45ab-acfa-d1d50894efd9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jzsld\" (UID: \"4c4879a6-596d-45ab-acfa-d1d50894efd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.364711 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.364870 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.365019 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.365123 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.365245 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.365347 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 06 15:02:54 crc 
kubenswrapper[4888]: I1006 15:02:54.366249 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.366405 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.366538 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.366642 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.368145 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.368643 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.369068 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.369529 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.369557 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.370051 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.372325 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.372606 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.374934 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.375645 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.376135 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.376455 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.376486 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.376642 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.376672 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.376831 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.377092 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.377205 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.377423 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.377640 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.377740 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.378107 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.378201 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.378437 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.378598 4888 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.378658 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.378730 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.379279 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.379414 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.379423 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.379656 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.379747 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.379818 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.379953 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.379972 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.380148 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.380196 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.380240 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.380334 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.380468 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.380585 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.380704 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.380831 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.381152 4888 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.381153 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.381287 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.381415 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.381446 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.381555 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.381672 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.381723 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.381913 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.382019 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.382068 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.379751 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.382023 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.382217 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.399701 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.400264 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.402533 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.402736 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ghnft"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.414939 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.417193 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p26ff"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.426720 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.427848 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.428121 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.428311 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.433337 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ftsgm"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.433992 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.439314 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.440123 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.441685 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.442176 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.442299 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.442415 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.445366 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.445696 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.446886 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.449670 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.458551 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.459007 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.459160 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.459296 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.459781 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.460175 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.460583 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.460880 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.461201 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.464783 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.465630 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.465985 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5vzr4"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.466669 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-t5brn"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467332 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gprn\" (UniqueName: \"kubernetes.io/projected/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-kube-api-access-7gprn\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467377 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-encryption-config\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467404 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcjzz\" (UniqueName: \"kubernetes.io/projected/b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450-kube-api-access-fcjzz\") pod \"service-ca-operator-777779d784-ghnft\" (UID: \"b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467428 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-config\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467449 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl6mq\" (UniqueName: \"kubernetes.io/projected/58cfe6b4-ea63-4ea9-86db-09034644d817-kube-api-access-xl6mq\") pod \"catalog-operator-68c6474976-4k4rc\" (UID: \"58cfe6b4-ea63-4ea9-86db-09034644d817\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467472 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-node-pullsecrets\") pod 
\"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467493 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-encryption-config\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467510 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57kdk\" (UniqueName: \"kubernetes.io/projected/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-kube-api-access-57kdk\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467537 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467558 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc275\" (UniqueName: \"kubernetes.io/projected/1fe6b2fe-b6ea-49eb-8f71-552c70f42e37-kube-api-access-sc275\") pod \"downloads-7954f5f757-6wlzb\" (UID: \"1fe6b2fe-b6ea-49eb-8f71-552c70f42e37\") " pod="openshift-console/downloads-7954f5f757-6wlzb" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467579 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58cfe6b4-ea63-4ea9-86db-09034644d817-srv-cert\") pod \"catalog-operator-68c6474976-4k4rc\" (UID: \"58cfe6b4-ea63-4ea9-86db-09034644d817\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467597 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-serving-cert\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467628 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467648 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/75cff788-fdba-4c8b-b765-d8a5c01b39a6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cw2lp\" (UID: \"75cff788-fdba-4c8b-b765-d8a5c01b39a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467669 
4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a856af6d-ccf3-46be-9ad5-81206cec4cee-auth-proxy-config\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467690 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467712 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvdc8\" (UniqueName: \"kubernetes.io/projected/1d6dbd47-5f20-486f-864c-7042f45c2ab4-kube-api-access-tvdc8\") pod \"cluster-samples-operator-665b6dd947-4wdjj\" (UID: \"1d6dbd47-5f20-486f-864c-7042f45c2ab4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467732 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-images\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467753 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467772 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9ca1572-99ce-4516-96ae-1a9772e4cb35-config-volume\") pod \"collect-profiles-29329380-hq887\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467823 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39249375-9962-457a-88bf-88c67b0ae936-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kcs58\" (UID: \"39249375-9962-457a-88bf-88c67b0ae936\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467852 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/050b3bfa-d33d-4729-b4b5-088f03ab45ab-serving-cert\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467874 4888 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4879a6-596d-45ab-acfa-d1d50894efd9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jzsld\" (UID: \"4c4879a6-596d-45ab-acfa-d1d50894efd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467894 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnzc6\" (UniqueName: \"kubernetes.io/projected/661fb82e-5117-41bb-a175-bf72f6c288bd-kube-api-access-wnzc6\") pod \"openshift-config-operator-7777fb866f-j8c9b\" (UID: \"661fb82e-5117-41bb-a175-bf72f6c288bd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467913 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-config\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467934 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450-config\") pod \"service-ca-operator-777779d784-ghnft\" (UID: \"b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467954 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-etcd-client\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467975 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467997 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdfhn\" (UniqueName: \"kubernetes.io/projected/e9ca1572-99ce-4516-96ae-1a9772e4cb35-kube-api-access-vdfhn\") pod \"collect-profiles-29329380-hq887\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468018 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/050b3bfa-d33d-4729-b4b5-088f03ab45ab-config\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468037 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-dir\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468058 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468079 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-etcd-client\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468099 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-oauth-config\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468117 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-service-ca\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468146 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j59m7\" (UniqueName: \"kubernetes.io/projected/4c4879a6-596d-45ab-acfa-d1d50894efd9-kube-api-access-j59m7\") pod \"openshift-apiserver-operator-796bbdcf4f-jzsld\" (UID: \"4c4879a6-596d-45ab-acfa-d1d50894efd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468165 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58cfe6b4-ea63-4ea9-86db-09034644d817-profile-collector-cert\") pod \"catalog-operator-68c6474976-4k4rc\" (UID: \"58cfe6b4-ea63-4ea9-86db-09034644d817\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468197 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661fb82e-5117-41bb-a175-bf72f6c288bd-serving-cert\") pod \"openshift-config-operator-7777fb866f-j8c9b\" (UID: \"661fb82e-5117-41bb-a175-bf72f6c288bd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468221 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-audit\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468241 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/003dd26e-1863-4682-af68-09b3584a44d6-serving-cert\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468263 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003dd26e-1863-4682-af68-09b3584a44d6-config\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468285 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468305 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468327 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-serving-cert\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468349 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsc79\" (UniqueName: \"kubernetes.io/projected/39249375-9962-457a-88bf-88c67b0ae936-kube-api-access-qsc79\") pod \"openshift-controller-manager-operator-756b6f6bc6-kcs58\" (UID: \"39249375-9962-457a-88bf-88c67b0ae936\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468372 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/050b3bfa-d33d-4729-b4b5-088f03ab45ab-service-ca-bundle\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468395 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-image-import-ca\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468416 4888 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468436 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468457 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/75cff788-fdba-4c8b-b765-d8a5c01b39a6-srv-cert\") pod \"olm-operator-6b444d44fb-cw2lp\" (UID: \"75cff788-fdba-4c8b-b765-d8a5c01b39a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468479 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-audit-dir\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468502 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqgzt\" (UniqueName: \"kubernetes.io/projected/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-kube-api-access-hqgzt\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468524 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xltsn\" (UniqueName: \"kubernetes.io/projected/050b3bfa-d33d-4729-b4b5-088f03ab45ab-kube-api-access-xltsn\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468559 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468582 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjlct\" (UniqueName: \"kubernetes.io/projected/75cff788-fdba-4c8b-b765-d8a5c01b39a6-kube-api-access-qjlct\") pod \"olm-operator-6b444d44fb-cw2lp\" (UID: \"75cff788-fdba-4c8b-b765-d8a5c01b39a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468604 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/050b3bfa-d33d-4729-b4b5-088f03ab45ab-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468624 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468643 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-serving-cert\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468666 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a856af6d-ccf3-46be-9ad5-81206cec4cee-config\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468689 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db8md\" (UniqueName: \"kubernetes.io/projected/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-kube-api-access-db8md\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468709 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/003dd26e-1863-4682-af68-09b3584a44d6-etcd-ca\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468729 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-audit-policies\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468751 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468771 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: 
\"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468810 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stqb4\" (UniqueName: \"kubernetes.io/projected/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-kube-api-access-stqb4\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468831 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs6jx\" (UniqueName: \"kubernetes.io/projected/6093f83d-6829-4712-91d0-eeed9f69d78d-kube-api-access-rs6jx\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468852 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-config\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468876 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-trusted-ca-bundle\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468896 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-audit-dir\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468917 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-config\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468940 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-oauth-serving-cert\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468966 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq662\" (UniqueName: \"kubernetes.io/projected/a856af6d-ccf3-46be-9ad5-81206cec4cee-kube-api-access-hq662\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.468988 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4c4879a6-596d-45ab-acfa-d1d50894efd9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jzsld\" (UID: \"4c4879a6-596d-45ab-acfa-d1d50894efd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469009 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39249375-9962-457a-88bf-88c67b0ae936-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kcs58\" (UID: \"39249375-9962-457a-88bf-88c67b0ae936\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469030 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l5z2\" (UniqueName: \"kubernetes.io/projected/003dd26e-1863-4682-af68-09b3584a44d6-kube-api-access-2l5z2\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469056 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469077 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469097 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9ca1572-99ce-4516-96ae-1a9772e4cb35-secret-volume\") pod \"collect-profiles-29329380-hq887\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469118 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450-serving-cert\") pod \"service-ca-operator-777779d784-ghnft\" (UID: \"b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469150 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-etcd-serving-ca\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469169 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/003dd26e-1863-4682-af68-09b3584a44d6-etcd-service-ca\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469188 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/003dd26e-1863-4682-af68-09b3584a44d6-etcd-client\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469211 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/661fb82e-5117-41bb-a175-bf72f6c288bd-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j8c9b\" (UID: \"661fb82e-5117-41bb-a175-bf72f6c288bd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469236 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-client-ca\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469255 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a856af6d-ccf3-46be-9ad5-81206cec4cee-machine-approver-tls\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469276 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-policies\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.469297 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d6dbd47-5f20-486f-864c-7042f45c2ab4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4wdjj\" (UID: \"1d6dbd47-5f20-486f-864c-7042f45c2ab4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.467359 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.470251 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-29s58"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.489372 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.490855 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.492009 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.495346 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.496516 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.496781 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-29s58" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.489404 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-encryption-config\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.497112 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.499296 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.501144 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.502940 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-node-pullsecrets\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.503306 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-etcd-serving-ca\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.503860 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-client-ca\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.507882 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.508546 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.509293 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-config\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.509735 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.510094 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-policies\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.507913 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-encryption-config\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.511724 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-config\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.513354 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a856af6d-ccf3-46be-9ad5-81206cec4cee-machine-approver-tls\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.513599 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.518579 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/050b3bfa-d33d-4729-b4b5-088f03ab45ab-config\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.518773 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-dir\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.519351 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a856af6d-ccf3-46be-9ad5-81206cec4cee-auth-proxy-config\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.523254 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-audit-dir\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.523897 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.524289 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-images\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.525830 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-audit\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.526324 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4879a6-596d-45ab-acfa-d1d50894efd9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jzsld\" (UID: \"4c4879a6-596d-45ab-acfa-d1d50894efd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.526341 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.526873 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a856af6d-ccf3-46be-9ad5-81206cec4cee-config\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.528187 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.528684 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.529580 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/050b3bfa-d33d-4729-b4b5-088f03ab45ab-service-ca-bundle\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.531085 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/661fb82e-5117-41bb-a175-bf72f6c288bd-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j8c9b\" (UID: \"661fb82e-5117-41bb-a175-bf72f6c288bd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.530495 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-audit-dir\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.536590 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-audit-policies\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.537012 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.537116 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/661fb82e-5117-41bb-a175-bf72f6c288bd-serving-cert\") pod \"openshift-config-operator-7777fb866f-j8c9b\" (UID: \"661fb82e-5117-41bb-a175-bf72f6c288bd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.537396 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-serving-cert\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.539149 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.539255 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.539497 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.541014 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-etcd-client\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.542167 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-image-import-ca\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.543560 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.543764 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.544503 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.546104 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-etcd-client\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.546935 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.551588 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/050b3bfa-d33d-4729-b4b5-088f03ab45ab-serving-cert\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.552787 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-serving-cert\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.554080 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.555942 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.554522 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4879a6-596d-45ab-acfa-d1d50894efd9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jzsld\" (UID: \"4c4879a6-596d-45ab-acfa-d1d50894efd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.554831 4888 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.554093 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.556370 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-config\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.555252 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.556685 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.555286 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.557437 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.558075 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/050b3bfa-d33d-4729-b4b5-088f03ab45ab-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.558307 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.558927 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bphs2"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.559350 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.559868 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4r4q2"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.559936 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.563087 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.563416 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j2xlv"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.567915 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s5g2l"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.569121 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.569135 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mkw2s"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.569205 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570106 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-serving-cert\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570140 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsc79\" (UniqueName: \"kubernetes.io/projected/39249375-9962-457a-88bf-88c67b0ae936-kube-api-access-qsc79\") pod \"openshift-controller-manager-operator-756b6f6bc6-kcs58\" (UID: \"39249375-9962-457a-88bf-88c67b0ae936\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570161 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/003dd26e-1863-4682-af68-09b3584a44d6-serving-cert\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570210 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003dd26e-1863-4682-af68-09b3584a44d6-config\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570236 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/75cff788-fdba-4c8b-b765-d8a5c01b39a6-srv-cert\") pod 
\"olm-operator-6b444d44fb-cw2lp\" (UID: \"75cff788-fdba-4c8b-b765-d8a5c01b39a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570270 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjlct\" (UniqueName: \"kubernetes.io/projected/75cff788-fdba-4c8b-b765-d8a5c01b39a6-kube-api-access-qjlct\") pod \"olm-operator-6b444d44fb-cw2lp\" (UID: \"75cff788-fdba-4c8b-b765-d8a5c01b39a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570288 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db8md\" (UniqueName: \"kubernetes.io/projected/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-kube-api-access-db8md\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570301 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/003dd26e-1863-4682-af68-09b3584a44d6-etcd-ca\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570329 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-trusted-ca-bundle\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570351 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-oauth-serving-cert\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570368 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39249375-9962-457a-88bf-88c67b0ae936-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kcs58\" (UID: \"39249375-9962-457a-88bf-88c67b0ae936\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570382 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l5z2\" (UniqueName: \"kubernetes.io/projected/003dd26e-1863-4682-af68-09b3584a44d6-kube-api-access-2l5z2\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570396 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9ca1572-99ce-4516-96ae-1a9772e4cb35-secret-volume\") pod \"collect-profiles-29329380-hq887\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570409 4888 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450-serving-cert\") pod \"service-ca-operator-777779d784-ghnft\" (UID: \"b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570431 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/003dd26e-1863-4682-af68-09b3584a44d6-etcd-service-ca\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570447 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/003dd26e-1863-4682-af68-09b3584a44d6-etcd-client\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570463 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d6dbd47-5f20-486f-864c-7042f45c2ab4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4wdjj\" (UID: \"1d6dbd47-5f20-486f-864c-7042f45c2ab4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570483 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-config\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570530 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl6mq\" (UniqueName: \"kubernetes.io/projected/58cfe6b4-ea63-4ea9-86db-09034644d817-kube-api-access-xl6mq\") pod \"catalog-operator-68c6474976-4k4rc\" (UID: \"58cfe6b4-ea63-4ea9-86db-09034644d817\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570549 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcjzz\" (UniqueName: \"kubernetes.io/projected/b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450-kube-api-access-fcjzz\") pod \"service-ca-operator-777779d784-ghnft\" (UID: \"b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570578 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58cfe6b4-ea63-4ea9-86db-09034644d817-srv-cert\") pod \"catalog-operator-68c6474976-4k4rc\" (UID: \"58cfe6b4-ea63-4ea9-86db-09034644d817\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570592 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc275\" (UniqueName: \"kubernetes.io/projected/1fe6b2fe-b6ea-49eb-8f71-552c70f42e37-kube-api-access-sc275\") pod 
\"downloads-7954f5f757-6wlzb\" (UID: \"1fe6b2fe-b6ea-49eb-8f71-552c70f42e37\") " pod="openshift-console/downloads-7954f5f757-6wlzb" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570617 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/75cff788-fdba-4c8b-b765-d8a5c01b39a6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cw2lp\" (UID: \"75cff788-fdba-4c8b-b765-d8a5c01b39a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570636 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvdc8\" (UniqueName: \"kubernetes.io/projected/1d6dbd47-5f20-486f-864c-7042f45c2ab4-kube-api-access-tvdc8\") pod \"cluster-samples-operator-665b6dd947-4wdjj\" (UID: \"1d6dbd47-5f20-486f-864c-7042f45c2ab4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570652 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9ca1572-99ce-4516-96ae-1a9772e4cb35-config-volume\") pod \"collect-profiles-29329380-hq887\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570667 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39249375-9962-457a-88bf-88c67b0ae936-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kcs58\" (UID: \"39249375-9962-457a-88bf-88c67b0ae936\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570689 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdfhn\" (UniqueName: \"kubernetes.io/projected/e9ca1572-99ce-4516-96ae-1a9772e4cb35-kube-api-access-vdfhn\") pod \"collect-profiles-29329380-hq887\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570703 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450-config\") pod \"service-ca-operator-777779d784-ghnft\" (UID: \"b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570718 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-service-ca\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.570740 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-oauth-config\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: 
I1006 15:02:54.570762 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58cfe6b4-ea63-4ea9-86db-09034644d817-profile-collector-cert\") pod \"catalog-operator-68c6474976-4k4rc\" (UID: \"58cfe6b4-ea63-4ea9-86db-09034644d817\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.571612 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.571641 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ghnft"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.572671 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39249375-9962-457a-88bf-88c67b0ae936-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kcs58\" (UID: \"39249375-9962-457a-88bf-88c67b0ae936\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.573100 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/003dd26e-1863-4682-af68-09b3584a44d6-etcd-ca\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.573369 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pv57g"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.574204 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-trusted-ca-bundle\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.574699 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-oauth-serving-cert\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.575279 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/003dd26e-1863-4682-af68-09b3584a44d6-serving-cert\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.575344 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-service-ca\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.576003 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-config\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.576507 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/003dd26e-1863-4682-af68-09b3584a44d6-etcd-client\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.577345 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39249375-9962-457a-88bf-88c67b0ae936-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kcs58\" (UID: \"39249375-9962-457a-88bf-88c67b0ae936\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.577882 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/003dd26e-1863-4682-af68-09b3584a44d6-etcd-service-ca\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.577978 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003dd26e-1863-4682-af68-09b3584a44d6-config\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.578107 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d6dbd47-5f20-486f-864c-7042f45c2ab4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4wdjj\" (UID: \"1d6dbd47-5f20-486f-864c-7042f45c2ab4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.578448 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-serving-cert\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.580613 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-oauth-config\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.582625 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.583765 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.589240 4888 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7g2cw"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.594521 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.595871 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dqtkw"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.599491 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6wlzb"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.602686 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.606994 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.609568 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.613689 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.615849 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.616710 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-n956g"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.618698 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.619461 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ftsgm"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.620664 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.622004 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/58cfe6b4-ea63-4ea9-86db-09034644d817-srv-cert\") pod \"catalog-operator-68c6474976-4k4rc\" (UID: \"58cfe6b4-ea63-4ea9-86db-09034644d817\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.622197 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vrp9q"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.623623 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p26ff"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.625175 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.626673 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bphs2"] Oct 06 
15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.628119 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.629558 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.630902 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pp792"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.631955 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pp792" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.632434 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.633887 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.635249 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-29s58"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.635921 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.636688 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.638292 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.640040 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9ca1572-99ce-4516-96ae-1a9772e4cb35-secret-volume\") pod \"collect-profiles-29329380-hq887\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.640465 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5vzr4"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.642332 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.644602 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/58cfe6b4-ea63-4ea9-86db-09034644d817-profile-collector-cert\") pod \"catalog-operator-68c6474976-4k4rc\" (UID: \"58cfe6b4-ea63-4ea9-86db-09034644d817\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.647180 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/75cff788-fdba-4c8b-b765-d8a5c01b39a6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cw2lp\" (UID: \"75cff788-fdba-4c8b-b765-d8a5c01b39a6\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.648211 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s5g2l"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.651786 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.656670 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.658612 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.661749 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.662768 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pp792"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.664084 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xsrjj"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.665275 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-grr76"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.665421 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xsrjj" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.666190 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-grr76" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.666570 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-grr76"] Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.675040 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.695749 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.716080 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.737336 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.755230 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.774690 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.783149 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9ca1572-99ce-4516-96ae-1a9772e4cb35-config-volume\") pod \"collect-profiles-29329380-hq887\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.794703 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.813524 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/75cff788-fdba-4c8b-b765-d8a5c01b39a6-srv-cert\") pod \"olm-operator-6b444d44fb-cw2lp\" (UID: \"75cff788-fdba-4c8b-b765-d8a5c01b39a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.817120 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.835707 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.864560 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.875215 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.895150 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.915543 4888 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.920734 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.920780 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.920734 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.921150 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.935393 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.954742 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.959983 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450-serving-cert\") pod \"service-ca-operator-777779d784-ghnft\" (UID: \"b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.975612 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.986883 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450-config\") pod \"service-ca-operator-777779d784-ghnft\" (UID: \"b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" Oct 06 15:02:54 crc kubenswrapper[4888]: I1006 15:02:54.995504 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.035331 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.055419 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.075070 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.095220 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.114953 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.155021 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 06 
15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.176337 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.194721 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.214816 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.234497 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.255242 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.275242 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.295400 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.315355 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.335551 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.355932 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.393241 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gprn\" (UniqueName: \"kubernetes.io/projected/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-kube-api-access-7gprn\") pod \"route-controller-manager-6576b87f9c-pzv4t\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.395777 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.415827 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.436397 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.455249 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.473856 4888 request.go:700] Waited for 1.003132346s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&limit=500&resourceVersion=0 Oct 06 15:02:55 crc 
kubenswrapper[4888]: I1006 15:02:55.476524 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.495539 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.516180 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.536105 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.556051 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.575611 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.595115 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.615455 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.628393 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.635748 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.656309 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.676507 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.695161 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.715875 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.736075 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.756278 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.775779 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.795603 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.801960 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t"] Oct 06 15:02:55 crc kubenswrapper[4888]: W1006 15:02:55.809261 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eb536d7_2076_4a86_ba81_e1c746ab6cf6.slice/crio-6e44df419757ee3867172e43f071653bc9e5c8dd3b2a7f2add399b03bc0c88bd WatchSource:0}: Error finding container 6e44df419757ee3867172e43f071653bc9e5c8dd3b2a7f2add399b03bc0c88bd: Status 404 returned error can't find the container with id 6e44df419757ee3867172e43f071653bc9e5c8dd3b2a7f2add399b03bc0c88bd Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.831500 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xltsn\" (UniqueName: \"kubernetes.io/projected/050b3bfa-d33d-4729-b4b5-088f03ab45ab-kube-api-access-xltsn\") pod \"authentication-operator-69f744f599-mkw2s\" (UID: \"050b3bfa-d33d-4729-b4b5-088f03ab45ab\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.850142 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57kdk\" (UniqueName: \"kubernetes.io/projected/ecd7117d-afeb-4c89-a4ba-0b098f9ca84a-kube-api-access-57kdk\") pod \"apiserver-76f77b778f-dqtkw\" (UID: \"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a\") " pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.871566 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs6jx\" (UniqueName: \"kubernetes.io/projected/6093f83d-6829-4712-91d0-eeed9f69d78d-kube-api-access-rs6jx\") pod \"oauth-openshift-558db77b4-7g2cw\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.875297 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.913461 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j59m7\" (UniqueName: \"kubernetes.io/projected/4c4879a6-596d-45ab-acfa-d1d50894efd9-kube-api-access-j59m7\") pod \"openshift-apiserver-operator-796bbdcf4f-jzsld\" (UID: \"4c4879a6-596d-45ab-acfa-d1d50894efd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.928654 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq662\" (UniqueName: \"kubernetes.io/projected/a856af6d-ccf3-46be-9ad5-81206cec4cee-kube-api-access-hq662\") pod \"machine-approver-56656f9798-gx2qv\" (UID: \"a856af6d-ccf3-46be-9ad5-81206cec4cee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.948916 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stqb4\" (UniqueName: \"kubernetes.io/projected/01614ba7-e313-44cc-9704-d7ea6bbfc7ed-kube-api-access-stqb4\") pod \"apiserver-7bbb656c7d-tvbpw\" (UID: \"01614ba7-e313-44cc-9704-d7ea6bbfc7ed\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.957112 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.972320 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqgzt\" (UniqueName: \"kubernetes.io/projected/9ef05c84-15d5-413d-baee-70e7ae0e2a8f-kube-api-access-hqgzt\") pod \"machine-api-operator-5694c8668f-4r4q2\" (UID: \"9ef05c84-15d5-413d-baee-70e7ae0e2a8f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.972650 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.991091 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnzc6\" (UniqueName: \"kubernetes.io/projected/661fb82e-5117-41bb-a175-bf72f6c288bd-kube-api-access-wnzc6\") pod \"openshift-config-operator-7777fb866f-j8c9b\" (UID: \"661fb82e-5117-41bb-a175-bf72f6c288bd\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:02:55 crc kubenswrapper[4888]: I1006 15:02:55.995129 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.018153 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.036478 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.063703 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.075665 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.077778 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.086552 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.096231 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.100641 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.115589 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.123092 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw"] Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.134776 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.138090 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.155249 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.162465 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.175582 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.185320 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dqtkw"] Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.196426 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.204596 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.219866 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.244775 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.263894 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.279536 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.296191 4888 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.319304 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.335173 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld"] Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.355624 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdfhn\" (UniqueName: \"kubernetes.io/projected/e9ca1572-99ce-4516-96ae-1a9772e4cb35-kube-api-access-vdfhn\") pod \"collect-profiles-29329380-hq887\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.371676 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvdc8\" (UniqueName: \"kubernetes.io/projected/1d6dbd47-5f20-486f-864c-7042f45c2ab4-kube-api-access-tvdc8\") pod \"cluster-samples-operator-665b6dd947-4wdjj\" (UID: \"1d6dbd47-5f20-486f-864c-7042f45c2ab4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.375874 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mkw2s"] Oct 06 15:02:56 crc kubenswrapper[4888]: W1006 15:02:56.382216 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c4879a6_596d_45ab_acfa_d1d50894efd9.slice/crio-60a7ef6f09e468e2681b5a08effc06f1268006777af9c4465a5171f18240dd65 WatchSource:0}: Error finding container 60a7ef6f09e468e2681b5a08effc06f1268006777af9c4465a5171f18240dd65: Status 404 returned error can't find the container with id 60a7ef6f09e468e2681b5a08effc06f1268006777af9c4465a5171f18240dd65 Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.398150 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l5z2\" (UniqueName: \"kubernetes.io/projected/003dd26e-1863-4682-af68-09b3584a44d6-kube-api-access-2l5z2\") pod \"etcd-operator-b45778765-pv57g\" (UID: \"003dd26e-1863-4682-af68-09b3584a44d6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.406018 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.416897 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7g2cw"] Oct 06 15:02:56 crc kubenswrapper[4888]: W1006 15:02:56.423591 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod050b3bfa_d33d_4729_b4b5_088f03ab45ab.slice/crio-372d58e3b43f7d8b618c9cde7d6acda995bc1f731c4b05a667fae6ad26e919e5 WatchSource:0}: Error finding container 372d58e3b43f7d8b618c9cde7d6acda995bc1f731c4b05a667fae6ad26e919e5: Status 404 returned error can't find the container with id 372d58e3b43f7d8b618c9cde7d6acda995bc1f731c4b05a667fae6ad26e919e5 Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.430528 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcjzz\" (UniqueName: \"kubernetes.io/projected/b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450-kube-api-access-fcjzz\") pod \"service-ca-operator-777779d784-ghnft\" (UID: \"b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.433549 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl6mq\" (UniqueName: \"kubernetes.io/projected/58cfe6b4-ea63-4ea9-86db-09034644d817-kube-api-access-xl6mq\") pod \"catalog-operator-68c6474976-4k4rc\" (UID: \"58cfe6b4-ea63-4ea9-86db-09034644d817\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.451875 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4r4q2"] Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.454677 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.462897 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsc79\" (UniqueName: \"kubernetes.io/projected/39249375-9962-457a-88bf-88c67b0ae936-kube-api-access-qsc79\") pod \"openshift-controller-manager-operator-756b6f6bc6-kcs58\" (UID: \"39249375-9962-457a-88bf-88c67b0ae936\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.472656 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjlct\" (UniqueName: \"kubernetes.io/projected/75cff788-fdba-4c8b-b765-d8a5c01b39a6-kube-api-access-qjlct\") pod \"olm-operator-6b444d44fb-cw2lp\" (UID: \"75cff788-fdba-4c8b-b765-d8a5c01b39a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.487100 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.490866 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc275\" (UniqueName: \"kubernetes.io/projected/1fe6b2fe-b6ea-49eb-8f71-552c70f42e37-kube-api-access-sc275\") pod \"downloads-7954f5f757-6wlzb\" (UID: \"1fe6b2fe-b6ea-49eb-8f71-552c70f42e37\") " pod="openshift-console/downloads-7954f5f757-6wlzb" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.494021 4888 request.go:700] Waited for 1.916415685s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.498162 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.508174 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.516963 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 06 15:02:56 crc kubenswrapper[4888]: W1006 15:02:56.520768 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ef05c84_15d5_413d_baee_70e7ae0e2a8f.slice/crio-84aa76abcbfc4934e7ef83286500932b3254f8456dc45260afee061d2595921d WatchSource:0}: Error finding container 84aa76abcbfc4934e7ef83286500932b3254f8456dc45260afee061d2595921d: Status 404 returned error can't find the container with id 84aa76abcbfc4934e7ef83286500932b3254f8456dc45260afee061d2595921d Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.528680 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b"] Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.529796 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db8md\" (UniqueName: \"kubernetes.io/projected/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-kube-api-access-db8md\") pod \"console-f9d7485db-vrp9q\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.535112 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.548403 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.558413 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.580880 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" event={"ID":"4eb536d7-2076-4a86-ba81-e1c746ab6cf6","Type":"ContainerStarted","Data":"042d7871103f4cc24d117f5fa2254a12ca9f367c1753eb2477b5cfa322cb073b"} Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.580949 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" event={"ID":"4eb536d7-2076-4a86-ba81-e1c746ab6cf6","Type":"ContainerStarted","Data":"6e44df419757ee3867172e43f071653bc9e5c8dd3b2a7f2add399b03bc0c88bd"} Oct 06 15:02:56 crc kubenswrapper[4888]: W1006 15:02:56.581070 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod661fb82e_5117_41bb_a175_bf72f6c288bd.slice/crio-875c277cc69c0bae4924d1504dae4c796091421716dcb5e546738d9a24ee90f4 WatchSource:0}: Error finding container 875c277cc69c0bae4924d1504dae4c796091421716dcb5e546738d9a24ee90f4: Status 404 returned error can't find the container with id 875c277cc69c0bae4924d1504dae4c796091421716dcb5e546738d9a24ee90f4 Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.581107 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.581527 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.582224 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" event={"ID":"6093f83d-6829-4712-91d0-eeed9f69d78d","Type":"ContainerStarted","Data":"ef42b5aa35e5efca181900c681adf1fd6442d703cd420b6e71444e8cbbd623d2"} Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.585266 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" event={"ID":"050b3bfa-d33d-4729-b4b5-088f03ab45ab","Type":"ContainerStarted","Data":"372d58e3b43f7d8b618c9cde7d6acda995bc1f731c4b05a667fae6ad26e919e5"} Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.586361 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" event={"ID":"4c4879a6-596d-45ab-acfa-d1d50894efd9","Type":"ContainerStarted","Data":"60a7ef6f09e468e2681b5a08effc06f1268006777af9c4465a5171f18240dd65"} Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.588402 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" event={"ID":"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a","Type":"ContainerStarted","Data":"fb23304a6f88cb67c005b4a6e51a0500db44b43bcbeb421251943967f7f9f62a"} Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.589546 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" 
event={"ID":"01614ba7-e313-44cc-9704-d7ea6bbfc7ed","Type":"ContainerStarted","Data":"6b6571c53ff1322c46a21f2f82e947e5acf0ef30379d0eb9a142f3bf645a6157"} Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.590455 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" event={"ID":"9ef05c84-15d5-413d-baee-70e7ae0e2a8f","Type":"ContainerStarted","Data":"84aa76abcbfc4934e7ef83286500932b3254f8456dc45260afee061d2595921d"} Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.593638 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" event={"ID":"a856af6d-ccf3-46be-9ad5-81206cec4cee","Type":"ContainerStarted","Data":"8b7043fe4cf582f8cb3ed2e72b6f093f2bb19dc24f9ffdf4722dc5fc4fa00aea"} Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.597196 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.616544 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.639391 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.658747 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.679631 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.696130 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.717249 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.719202 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6wlzb" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.739075 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.740122 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.745997 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.761852 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.776399 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.804865 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.809455 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj"] Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.819598 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.910991 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922352 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/41f43231-5670-4506-8593-4cf40be3f95c-signing-key\") pod \"service-ca-9c57cc56f-ftsgm\" (UID: \"41f43231-5670-4506-8593-4cf40be3f95c\") " pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922417 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-bound-sa-token\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922443 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dafa03ea-82b7-49f1-bec8-69bec1c31e50-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wf5nm\" (UID: \"dafa03ea-82b7-49f1-bec8-69bec1c31e50\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922633 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52qxt\" (UniqueName: \"kubernetes.io/projected/41f43231-5670-4506-8593-4cf40be3f95c-kube-api-access-52qxt\") pod \"service-ca-9c57cc56f-ftsgm\" (UID: \"41f43231-5670-4506-8593-4cf40be3f95c\") " pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922670 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hhs2\" (UniqueName: \"kubernetes.io/projected/8befdf1d-c770-4804-bce0-ef5cc8787c8f-kube-api-access-9hhs2\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922726 4888 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-tls\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922752 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p26ff\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922768 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-trusted-ca\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922783 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922812 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p26ff\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922829 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8befdf1d-c770-4804-bce0-ef5cc8787c8f-config\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922843 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8befdf1d-c770-4804-bce0-ef5cc8787c8f-serving-cert\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.922858 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5686c4e3-454d-4282-88d5-326ee00e2e2a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f792m\" (UID: \"5686c4e3-454d-4282-88d5-326ee00e2e2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.923013 4888 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-certificates\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.923036 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dafa03ea-82b7-49f1-bec8-69bec1c31e50-config\") pod \"kube-controller-manager-operator-78b949d7b-wf5nm\" (UID: \"dafa03ea-82b7-49f1-bec8-69bec1c31e50\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.923067 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8befdf1d-c770-4804-bce0-ef5cc8787c8f-trusted-ca\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.923103 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/41f43231-5670-4506-8593-4cf40be3f95c-signing-cabundle\") pod \"service-ca-9c57cc56f-ftsgm\" (UID: \"41f43231-5670-4506-8593-4cf40be3f95c\") " pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.923220 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.923238 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9gnk\" (UniqueName: \"kubernetes.io/projected/e4c47c9f-0090-4ef0-9c59-a705daef8d94-kube-api-access-d9gnk\") pod \"migrator-59844c95c7-h8lmh\" (UID: \"e4c47c9f-0090-4ef0-9c59-a705daef8d94\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.923349 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rn68\" (UniqueName: \"kubernetes.io/projected/5686c4e3-454d-4282-88d5-326ee00e2e2a-kube-api-access-2rn68\") pod \"control-plane-machine-set-operator-78cbb6b69f-f792m\" (UID: \"5686c4e3-454d-4282-88d5-326ee00e2e2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.923365 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 
15:02:56.923396 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dafa03ea-82b7-49f1-bec8-69bec1c31e50-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wf5nm\" (UID: \"dafa03ea-82b7-49f1-bec8-69bec1c31e50\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.923465 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b4n7\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-kube-api-access-9b4n7\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.923483 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skqjz\" (UniqueName: \"kubernetes.io/projected/821ef1e1-2128-4c28-9030-8faacb7d5fb7-kube-api-access-skqjz\") pod \"marketplace-operator-79b997595-p26ff\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:56 crc kubenswrapper[4888]: E1006 15:02:56.932123 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:57.432104457 +0000 UTC m=+117.244455175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:56 crc kubenswrapper[4888]: I1006 15:02:56.973782 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pv57g"] Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028065 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028260 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5z4p\" (UniqueName: \"kubernetes.io/projected/da1ba4fe-4298-43ab-90f4-24796daca3e4-kube-api-access-w5z4p\") pod \"ingress-canary-pp792\" (UID: \"da1ba4fe-4298-43ab-90f4-24796daca3e4\") " pod="openshift-ingress-canary/ingress-canary-pp792" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028279 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w857d\" (UniqueName: \"kubernetes.io/projected/7005cda1-a3ae-48c8-80d4-d5d14496e419-kube-api-access-w857d\") pod \"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028295 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7cb105c3-30b4-4545-8608-416e248b1345-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-q9b62\" (UID: \"7cb105c3-30b4-4545-8608-416e248b1345\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028312 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cd61879-7692-46b1-85d0-11f19a350bde-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028331 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p5kt\" (UniqueName: \"kubernetes.io/projected/4cd61879-7692-46b1-85d0-11f19a350bde-kube-api-access-8p5kt\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028368 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9gnk\" (UniqueName: \"kubernetes.io/projected/e4c47c9f-0090-4ef0-9c59-a705daef8d94-kube-api-access-d9gnk\") pod \"migrator-59844c95c7-h8lmh\" (UID: \"e4c47c9f-0090-4ef0-9c59-a705daef8d94\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028386 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-csi-data-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028403 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/314a92e1-2444-4ebd-a080-7619bed44c0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b4qzt\" (UID: \"314a92e1-2444-4ebd-a080-7619bed44c0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028417 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-config\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028441 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc56c25b-5e9c-44dc-a333-14e2aa680f44-webhook-cert\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: 
\"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028464 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7005cda1-a3ae-48c8-80d4-d5d14496e419-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028489 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b4n7\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-kube-api-access-9b4n7\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028505 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-client-ca\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028530 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skqjz\" (UniqueName: \"kubernetes.io/projected/821ef1e1-2128-4c28-9030-8faacb7d5fb7-kube-api-access-skqjz\") pod \"marketplace-operator-79b997595-p26ff\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028548 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5919f36e-dcc7-439b-9660-63f7b8c32b5a-default-certificate\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028572 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmmq4\" (UniqueName: \"kubernetes.io/projected/800c2317-34fe-4640-b0fc-d275fccca804-kube-api-access-pmmq4\") pod \"dns-operator-744455d44c-29s58\" (UID: \"800c2317-34fe-4640-b0fc-d275fccca804\") " pod="openshift-dns-operator/dns-operator-744455d44c-29s58" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028595 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5919f36e-dcc7-439b-9660-63f7b8c32b5a-service-ca-bundle\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028610 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cd61879-7692-46b1-85d0-11f19a350bde-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028625 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7pq8\" (UniqueName: \"kubernetes.io/projected/7cb105c3-30b4-4545-8608-416e248b1345-kube-api-access-s7pq8\") pod \"machine-config-controller-84d6567774-q9b62\" (UID: \"7cb105c3-30b4-4545-8608-416e248b1345\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028649 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5919f36e-dcc7-439b-9660-63f7b8c32b5a-metrics-certs\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028682 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-bound-sa-token\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028697 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2ngqv\" (UID: \"54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028715 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e701c92-0292-4aef-a963-cafeed71db2f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8l7f7\" (UID: \"6e701c92-0292-4aef-a963-cafeed71db2f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028748 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsxpr\" (UniqueName: \"kubernetes.io/projected/2451070b-304a-439f-a920-27834d657820-kube-api-access-vsxpr\") pod \"multus-admission-controller-857f4d67dd-5vzr4\" (UID: \"2451070b-304a-439f-a920-27834d657820\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028764 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cd61879-7692-46b1-85d0-11f19a350bde-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028808 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52qxt\" (UniqueName: \"kubernetes.io/projected/41f43231-5670-4506-8593-4cf40be3f95c-kube-api-access-52qxt\") pod \"service-ca-9c57cc56f-ftsgm\" (UID: 
\"41f43231-5670-4506-8593-4cf40be3f95c\") " pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028825 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-tls\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028855 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-registration-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028880 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7005cda1-a3ae-48c8-80d4-d5d14496e419-proxy-tls\") pod \"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028903 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028920 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8befdf1d-c770-4804-bce0-ef5cc8787c8f-serving-cert\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028935 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da1ba4fe-4298-43ab-90f4-24796daca3e4-cert\") pod \"ingress-canary-pp792\" (UID: \"da1ba4fe-4298-43ab-90f4-24796daca3e4\") " pod="openshift-ingress-canary/ingress-canary-pp792" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028950 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2451070b-304a-439f-a920-27834d657820-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5vzr4\" (UID: \"2451070b-304a-439f-a920-27834d657820\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028975 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dafa03ea-82b7-49f1-bec8-69bec1c31e50-config\") pod \"kube-controller-manager-operator-78b949d7b-wf5nm\" (UID: \"dafa03ea-82b7-49f1-bec8-69bec1c31e50\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.028990 4888 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8befdf1d-c770-4804-bce0-ef5cc8787c8f-trusted-ca\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029013 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/314a92e1-2444-4ebd-a080-7619bed44c0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b4qzt\" (UID: \"314a92e1-2444-4ebd-a080-7619bed44c0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" Oct 06 15:02:57 crc kubenswrapper[4888]: E1006 15:02:57.029055 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:57.529018779 +0000 UTC m=+117.341369517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029107 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bc56c25b-5e9c-44dc-a333-14e2aa680f44-tmpfs\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: \"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029147 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svv6j\" (UniqueName: \"kubernetes.io/projected/94d5add2-07ad-4171-bebc-8129f4819ccb-kube-api-access-svv6j\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029170 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/023bca41-f6db-453c-b936-1f3ebf1d675b-node-bootstrap-token\") pod \"machine-config-server-xsrjj\" (UID: \"023bca41-f6db-453c-b936-1f3ebf1d675b\") " pod="openshift-machine-config-operator/machine-config-server-xsrjj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029190 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr2nn\" (UniqueName: \"kubernetes.io/projected/1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3-kube-api-access-vr2nn\") pod \"package-server-manager-789f6589d5-gj6m8\" (UID: \"1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029724 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wp7c7\" (UniqueName: \"kubernetes.io/projected/f1e75e2a-e00f-4299-8d19-83c62c76ed52-kube-api-access-wp7c7\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029754 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e701c92-0292-4aef-a963-cafeed71db2f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8l7f7\" (UID: \"6e701c92-0292-4aef-a963-cafeed71db2f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029771 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-plugins-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029787 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b65d758-78c2-4e61-8553-2298157b49a3-serving-cert\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029851 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e75e2a-e00f-4299-8d19-83c62c76ed52-trusted-ca\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029870 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/023bca41-f6db-453c-b936-1f3ebf1d675b-certs\") pod \"machine-config-server-xsrjj\" (UID: \"023bca41-f6db-453c-b936-1f3ebf1d675b\") " pod="openshift-machine-config-operator/machine-config-server-xsrjj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029895 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2ngqv\" (UID: \"54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029911 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvnm7\" (UniqueName: \"kubernetes.io/projected/023bca41-f6db-453c-b936-1f3ebf1d675b-kube-api-access-bvnm7\") pod \"machine-config-server-xsrjj\" (UID: \"023bca41-f6db-453c-b936-1f3ebf1d675b\") " pod="openshift-machine-config-operator/machine-config-server-xsrjj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029928 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029946 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rn68\" (UniqueName: \"kubernetes.io/projected/5686c4e3-454d-4282-88d5-326ee00e2e2a-kube-api-access-2rn68\") pod \"control-plane-machine-set-operator-78cbb6b69f-f792m\" (UID: \"5686c4e3-454d-4282-88d5-326ee00e2e2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.029962 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.033123 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dafa03ea-82b7-49f1-bec8-69bec1c31e50-config\") pod \"kube-controller-manager-operator-78b949d7b-wf5nm\" (UID: \"dafa03ea-82b7-49f1-bec8-69bec1c31e50\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.033663 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2ngqv\" (UID: \"54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.033702 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-socket-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.033806 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034056 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dafa03ea-82b7-49f1-bec8-69bec1c31e50-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wf5nm\" (UID: \"dafa03ea-82b7-49f1-bec8-69bec1c31e50\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034118 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lct75\" (UniqueName: 
\"kubernetes.io/projected/5919f36e-dcc7-439b-9660-63f7b8c32b5a-kube-api-access-lct75\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034152 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/800c2317-34fe-4640-b0fc-d275fccca804-metrics-tls\") pod \"dns-operator-744455d44c-29s58\" (UID: \"800c2317-34fe-4640-b0fc-d275fccca804\") " pod="openshift-dns-operator/dns-operator-744455d44c-29s58" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034180 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20d7ff7d-bdc3-4bd6-b748-b44da7dc0427-config-volume\") pod \"dns-default-grr76\" (UID: \"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427\") " pod="openshift-dns/dns-default-grr76" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034216 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28hpm\" (UniqueName: \"kubernetes.io/projected/20d7ff7d-bdc3-4bd6-b748-b44da7dc0427-kube-api-access-28hpm\") pod \"dns-default-grr76\" (UID: \"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427\") " pod="openshift-dns/dns-default-grr76" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034232 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv6qk\" (UniqueName: \"kubernetes.io/projected/6e701c92-0292-4aef-a963-cafeed71db2f-kube-api-access-zv6qk\") pod \"kube-storage-version-migrator-operator-b67b599dd-8l7f7\" (UID: \"6e701c92-0292-4aef-a963-cafeed71db2f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034250 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7cb105c3-30b4-4545-8608-416e248b1345-proxy-tls\") pod \"machine-config-controller-84d6567774-q9b62\" (UID: \"7cb105c3-30b4-4545-8608-416e248b1345\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034279 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20d7ff7d-bdc3-4bd6-b748-b44da7dc0427-metrics-tls\") pod \"dns-default-grr76\" (UID: \"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427\") " pod="openshift-dns/dns-default-grr76" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034310 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1e75e2a-e00f-4299-8d19-83c62c76ed52-bound-sa-token\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034341 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/314a92e1-2444-4ebd-a080-7619bed44c0d-config\") pod \"kube-apiserver-operator-766d6c64bb-b4qzt\" (UID: 
\"314a92e1-2444-4ebd-a080-7619bed44c0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034423 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpw2m\" (UniqueName: \"kubernetes.io/projected/8b65d758-78c2-4e61-8553-2298157b49a3-kube-api-access-fpw2m\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034538 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gj6m8\" (UID: \"1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034565 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5919f36e-dcc7-439b-9660-63f7b8c32b5a-stats-auth\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034597 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/41f43231-5670-4506-8593-4cf40be3f95c-signing-key\") pod \"service-ca-9c57cc56f-ftsgm\" (UID: \"41f43231-5670-4506-8593-4cf40be3f95c\") " pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.034617 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-mountpoint-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.035014 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dafa03ea-82b7-49f1-bec8-69bec1c31e50-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wf5nm\" (UID: \"dafa03ea-82b7-49f1-bec8-69bec1c31e50\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.039312 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8befdf1d-c770-4804-bce0-ef5cc8787c8f-serving-cert\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.039722 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8befdf1d-c770-4804-bce0-ef5cc8787c8f-trusted-ca\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:57 
crc kubenswrapper[4888]: I1006 15:02:57.039950 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hhs2\" (UniqueName: \"kubernetes.io/projected/8befdf1d-c770-4804-bce0-ef5cc8787c8f-kube-api-access-9hhs2\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.040166 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc56c25b-5e9c-44dc-a333-14e2aa680f44-apiservice-cert\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: \"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.040197 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p26ff\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.040715 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-trusted-ca\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.040772 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p26ff\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.040799 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8befdf1d-c770-4804-bce0-ef5cc8787c8f-config\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.040889 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5686c4e3-454d-4282-88d5-326ee00e2e2a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f792m\" (UID: \"5686c4e3-454d-4282-88d5-326ee00e2e2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.041723 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-trusted-ca\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.042126 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc"] Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.042367 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-certificates\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.042872 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8befdf1d-c770-4804-bce0-ef5cc8787c8f-config\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.042957 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/41f43231-5670-4506-8593-4cf40be3f95c-signing-cabundle\") pod \"service-ca-9c57cc56f-ftsgm\" (UID: \"41f43231-5670-4506-8593-4cf40be3f95c\") " pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.043004 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5w4r\" (UniqueName: \"kubernetes.io/projected/bc56c25b-5e9c-44dc-a333-14e2aa680f44-kube-api-access-q5w4r\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: \"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.043026 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7005cda1-a3ae-48c8-80d4-d5d14496e419-images\") pod \"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.043632 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/41f43231-5670-4506-8593-4cf40be3f95c-signing-cabundle\") pod \"service-ca-9c57cc56f-ftsgm\" (UID: \"41f43231-5670-4506-8593-4cf40be3f95c\") " pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.044052 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dafa03ea-82b7-49f1-bec8-69bec1c31e50-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-wf5nm\" (UID: \"dafa03ea-82b7-49f1-bec8-69bec1c31e50\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.044117 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1e75e2a-e00f-4299-8d19-83c62c76ed52-metrics-tls\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.044358 4888 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.044708 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-certificates\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.045676 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5686c4e3-454d-4282-88d5-326ee00e2e2a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f792m\" (UID: \"5686c4e3-454d-4282-88d5-326ee00e2e2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.046536 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p26ff\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.050259 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/41f43231-5670-4506-8593-4cf40be3f95c-signing-key\") pod \"service-ca-9c57cc56f-ftsgm\" (UID: \"41f43231-5670-4506-8593-4cf40be3f95c\") " pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.051886 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-tls\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.054215 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skqjz\" (UniqueName: \"kubernetes.io/projected/821ef1e1-2128-4c28-9030-8faacb7d5fb7-kube-api-access-skqjz\") pod \"marketplace-operator-79b997595-p26ff\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.061446 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p26ff\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.105620 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9gnk\" (UniqueName: \"kubernetes.io/projected/e4c47c9f-0090-4ef0-9c59-a705daef8d94-kube-api-access-d9gnk\") pod 
\"migrator-59844c95c7-h8lmh\" (UID: \"e4c47c9f-0090-4ef0-9c59-a705daef8d94\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.134554 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b4n7\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-kube-api-access-9b4n7\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.139253 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.143681 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-bound-sa-token\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148417 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gj6m8\" (UID: \"1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148449 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5919f36e-dcc7-439b-9660-63f7b8c32b5a-stats-auth\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148469 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-mountpoint-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148502 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc56c25b-5e9c-44dc-a333-14e2aa680f44-apiservice-cert\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: \"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148526 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5w4r\" (UniqueName: \"kubernetes.io/projected/bc56c25b-5e9c-44dc-a333-14e2aa680f44-kube-api-access-q5w4r\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: \"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148540 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7005cda1-a3ae-48c8-80d4-d5d14496e419-images\") pod 
\"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148559 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1e75e2a-e00f-4299-8d19-83c62c76ed52-metrics-tls\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148574 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7cb105c3-30b4-4545-8608-416e248b1345-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-q9b62\" (UID: \"7cb105c3-30b4-4545-8608-416e248b1345\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148592 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5z4p\" (UniqueName: \"kubernetes.io/projected/da1ba4fe-4298-43ab-90f4-24796daca3e4-kube-api-access-w5z4p\") pod \"ingress-canary-pp792\" (UID: \"da1ba4fe-4298-43ab-90f4-24796daca3e4\") " pod="openshift-ingress-canary/ingress-canary-pp792" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148606 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w857d\" (UniqueName: \"kubernetes.io/projected/7005cda1-a3ae-48c8-80d4-d5d14496e419-kube-api-access-w857d\") pod \"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148625 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cd61879-7692-46b1-85d0-11f19a350bde-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148646 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p5kt\" (UniqueName: \"kubernetes.io/projected/4cd61879-7692-46b1-85d0-11f19a350bde-kube-api-access-8p5kt\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148666 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-csi-data-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148689 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: 
\"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148710 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-config\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148724 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/314a92e1-2444-4ebd-a080-7619bed44c0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b4qzt\" (UID: \"314a92e1-2444-4ebd-a080-7619bed44c0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148739 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc56c25b-5e9c-44dc-a333-14e2aa680f44-webhook-cert\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: \"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148756 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7005cda1-a3ae-48c8-80d4-d5d14496e419-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148770 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-client-ca\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148785 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5919f36e-dcc7-439b-9660-63f7b8c32b5a-default-certificate\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148805 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmmq4\" (UniqueName: \"kubernetes.io/projected/800c2317-34fe-4640-b0fc-d275fccca804-kube-api-access-pmmq4\") pod \"dns-operator-744455d44c-29s58\" (UID: \"800c2317-34fe-4640-b0fc-d275fccca804\") " pod="openshift-dns-operator/dns-operator-744455d44c-29s58" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148822 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cd61879-7692-46b1-85d0-11f19a350bde-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148861 4888 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7pq8\" (UniqueName: \"kubernetes.io/projected/7cb105c3-30b4-4545-8608-416e248b1345-kube-api-access-s7pq8\") pod \"machine-config-controller-84d6567774-q9b62\" (UID: \"7cb105c3-30b4-4545-8608-416e248b1345\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148879 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5919f36e-dcc7-439b-9660-63f7b8c32b5a-service-ca-bundle\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148896 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5919f36e-dcc7-439b-9660-63f7b8c32b5a-metrics-certs\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148912 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e701c92-0292-4aef-a963-cafeed71db2f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8l7f7\" (UID: \"6e701c92-0292-4aef-a963-cafeed71db2f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148927 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsxpr\" (UniqueName: \"kubernetes.io/projected/2451070b-304a-439f-a920-27834d657820-kube-api-access-vsxpr\") pod \"multus-admission-controller-857f4d67dd-5vzr4\" (UID: \"2451070b-304a-439f-a920-27834d657820\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148941 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cd61879-7692-46b1-85d0-11f19a350bde-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148957 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2ngqv\" (UID: \"54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.148986 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-registration-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149006 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/7005cda1-a3ae-48c8-80d4-d5d14496e419-proxy-tls\") pod \"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149019 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da1ba4fe-4298-43ab-90f4-24796daca3e4-cert\") pod \"ingress-canary-pp792\" (UID: \"da1ba4fe-4298-43ab-90f4-24796daca3e4\") " pod="openshift-ingress-canary/ingress-canary-pp792" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149033 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2451070b-304a-439f-a920-27834d657820-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5vzr4\" (UID: \"2451070b-304a-439f-a920-27834d657820\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149052 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svv6j\" (UniqueName: \"kubernetes.io/projected/94d5add2-07ad-4171-bebc-8129f4819ccb-kube-api-access-svv6j\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149068 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/023bca41-f6db-453c-b936-1f3ebf1d675b-node-bootstrap-token\") pod \"machine-config-server-xsrjj\" (UID: \"023bca41-f6db-453c-b936-1f3ebf1d675b\") " pod="openshift-machine-config-operator/machine-config-server-xsrjj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149084 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/314a92e1-2444-4ebd-a080-7619bed44c0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b4qzt\" (UID: \"314a92e1-2444-4ebd-a080-7619bed44c0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149097 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bc56c25b-5e9c-44dc-a333-14e2aa680f44-tmpfs\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: \"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149112 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2nn\" (UniqueName: \"kubernetes.io/projected/1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3-kube-api-access-vr2nn\") pod \"package-server-manager-789f6589d5-gj6m8\" (UID: \"1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149111 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887"] Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149126 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp7c7\" (UniqueName: 
\"kubernetes.io/projected/f1e75e2a-e00f-4299-8d19-83c62c76ed52-kube-api-access-wp7c7\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149238 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e701c92-0292-4aef-a963-cafeed71db2f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8l7f7\" (UID: \"6e701c92-0292-4aef-a963-cafeed71db2f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149269 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-plugins-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149295 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b65d758-78c2-4e61-8553-2298157b49a3-serving-cert\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149327 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e75e2a-e00f-4299-8d19-83c62c76ed52-trusted-ca\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149364 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/023bca41-f6db-453c-b936-1f3ebf1d675b-certs\") pod \"machine-config-server-xsrjj\" (UID: \"023bca41-f6db-453c-b936-1f3ebf1d675b\") " pod="openshift-machine-config-operator/machine-config-server-xsrjj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149393 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2ngqv\" (UID: \"54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149425 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvnm7\" (UniqueName: \"kubernetes.io/projected/023bca41-f6db-453c-b936-1f3ebf1d675b-kube-api-access-bvnm7\") pod \"machine-config-server-xsrjj\" (UID: \"023bca41-f6db-453c-b936-1f3ebf1d675b\") " pod="openshift-machine-config-operator/machine-config-server-xsrjj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149455 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149477 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-socket-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149510 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2ngqv\" (UID: \"54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149547 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lct75\" (UniqueName: \"kubernetes.io/projected/5919f36e-dcc7-439b-9660-63f7b8c32b5a-kube-api-access-lct75\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149571 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/800c2317-34fe-4640-b0fc-d275fccca804-metrics-tls\") pod \"dns-operator-744455d44c-29s58\" (UID: \"800c2317-34fe-4640-b0fc-d275fccca804\") " pod="openshift-dns-operator/dns-operator-744455d44c-29s58" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149597 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20d7ff7d-bdc3-4bd6-b748-b44da7dc0427-config-volume\") pod \"dns-default-grr76\" (UID: \"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427\") " pod="openshift-dns/dns-default-grr76" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149617 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28hpm\" (UniqueName: \"kubernetes.io/projected/20d7ff7d-bdc3-4bd6-b748-b44da7dc0427-kube-api-access-28hpm\") pod \"dns-default-grr76\" (UID: \"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427\") " pod="openshift-dns/dns-default-grr76" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149639 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv6qk\" (UniqueName: \"kubernetes.io/projected/6e701c92-0292-4aef-a963-cafeed71db2f-kube-api-access-zv6qk\") pod \"kube-storage-version-migrator-operator-b67b599dd-8l7f7\" (UID: \"6e701c92-0292-4aef-a963-cafeed71db2f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149660 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7cb105c3-30b4-4545-8608-416e248b1345-proxy-tls\") pod \"machine-config-controller-84d6567774-q9b62\" (UID: \"7cb105c3-30b4-4545-8608-416e248b1345\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149681 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/f1e75e2a-e00f-4299-8d19-83c62c76ed52-bound-sa-token\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149712 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20d7ff7d-bdc3-4bd6-b748-b44da7dc0427-metrics-tls\") pod \"dns-default-grr76\" (UID: \"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427\") " pod="openshift-dns/dns-default-grr76" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149733 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/314a92e1-2444-4ebd-a080-7619bed44c0d-config\") pod \"kube-apiserver-operator-766d6c64bb-b4qzt\" (UID: \"314a92e1-2444-4ebd-a080-7619bed44c0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.149755 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpw2m\" (UniqueName: \"kubernetes.io/projected/8b65d758-78c2-4e61-8553-2298157b49a3-kube-api-access-fpw2m\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.155469 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e701c92-0292-4aef-a963-cafeed71db2f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8l7f7\" (UID: \"6e701c92-0292-4aef-a963-cafeed71db2f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.155726 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-plugins-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.160345 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-socket-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.162232 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7005cda1-a3ae-48c8-80d4-d5d14496e419-images\") pod \"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.162300 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-mountpoint-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.163088 4888 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.163160 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-csi-data-dir\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.163599 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7005cda1-a3ae-48c8-80d4-d5d14496e419-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.169783 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-client-ca\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.170290 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5919f36e-dcc7-439b-9660-63f7b8c32b5a-default-certificate\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.171898 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1e75e2a-e00f-4299-8d19-83c62c76ed52-trusted-ca\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.172322 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gj6m8\" (UID: \"1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.172726 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20d7ff7d-bdc3-4bd6-b748-b44da7dc0427-config-volume\") pod \"dns-default-grr76\" (UID: \"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427\") " pod="openshift-dns/dns-default-grr76" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.172802 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc56c25b-5e9c-44dc-a333-14e2aa680f44-apiservice-cert\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: \"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.172890 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rn68\" (UniqueName: \"kubernetes.io/projected/5686c4e3-454d-4282-88d5-326ee00e2e2a-kube-api-access-2rn68\") pod \"control-plane-machine-set-operator-78cbb6b69f-f792m\" (UID: \"5686c4e3-454d-4282-88d5-326ee00e2e2a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.173677 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2ngqv\" (UID: \"54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.174050 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7cb105c3-30b4-4545-8608-416e248b1345-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-q9b62\" (UID: \"7cb105c3-30b4-4545-8608-416e248b1345\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.174077 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5919f36e-dcc7-439b-9660-63f7b8c32b5a-stats-auth\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.174773 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-config\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.175131 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b65d758-78c2-4e61-8553-2298157b49a3-serving-cert\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.175507 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/800c2317-34fe-4640-b0fc-d275fccca804-metrics-tls\") pod \"dns-operator-744455d44c-29s58\" (UID: \"800c2317-34fe-4640-b0fc-d275fccca804\") " pod="openshift-dns-operator/dns-operator-744455d44c-29s58" Oct 06 15:02:57 crc kubenswrapper[4888]: E1006 15:02:57.175790 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:57.675770991 +0000 UTC m=+117.488121709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.176617 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f1e75e2a-e00f-4299-8d19-83c62c76ed52-metrics-tls\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.181785 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5919f36e-dcc7-439b-9660-63f7b8c32b5a-service-ca-bundle\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.182381 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bc56c25b-5e9c-44dc-a333-14e2aa680f44-tmpfs\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: \"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.184061 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/314a92e1-2444-4ebd-a080-7619bed44c0d-config\") pod \"kube-apiserver-operator-766d6c64bb-b4qzt\" (UID: \"314a92e1-2444-4ebd-a080-7619bed44c0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.184625 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/023bca41-f6db-453c-b936-1f3ebf1d675b-certs\") pod \"machine-config-server-xsrjj\" (UID: \"023bca41-f6db-453c-b936-1f3ebf1d675b\") " pod="openshift-machine-config-operator/machine-config-server-xsrjj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.185082 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc56c25b-5e9c-44dc-a333-14e2aa680f44-webhook-cert\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: \"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.185965 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2ngqv\" (UID: \"54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.186073 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94d5add2-07ad-4171-bebc-8129f4819ccb-registration-dir\") pod 
\"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.186096 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cd61879-7692-46b1-85d0-11f19a350bde-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.186560 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e701c92-0292-4aef-a963-cafeed71db2f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8l7f7\" (UID: \"6e701c92-0292-4aef-a963-cafeed71db2f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.190161 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/023bca41-f6db-453c-b936-1f3ebf1d675b-node-bootstrap-token\") pod \"machine-config-server-xsrjj\" (UID: \"023bca41-f6db-453c-b936-1f3ebf1d675b\") " pod="openshift-machine-config-operator/machine-config-server-xsrjj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.190493 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7005cda1-a3ae-48c8-80d4-d5d14496e419-proxy-tls\") pod \"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.191462 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52qxt\" (UniqueName: \"kubernetes.io/projected/41f43231-5670-4506-8593-4cf40be3f95c-kube-api-access-52qxt\") pod \"service-ca-9c57cc56f-ftsgm\" (UID: \"41f43231-5670-4506-8593-4cf40be3f95c\") " pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.191787 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da1ba4fe-4298-43ab-90f4-24796daca3e4-cert\") pod \"ingress-canary-pp792\" (UID: \"da1ba4fe-4298-43ab-90f4-24796daca3e4\") " pod="openshift-ingress-canary/ingress-canary-pp792" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.192046 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dafa03ea-82b7-49f1-bec8-69bec1c31e50-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-wf5nm\" (UID: \"dafa03ea-82b7-49f1-bec8-69bec1c31e50\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.199008 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2451070b-304a-439f-a920-27834d657820-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5vzr4\" (UID: \"2451070b-304a-439f-a920-27834d657820\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.199286 4888 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5919f36e-dcc7-439b-9660-63f7b8c32b5a-metrics-certs\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.199361 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20d7ff7d-bdc3-4bd6-b748-b44da7dc0427-metrics-tls\") pod \"dns-default-grr76\" (UID: \"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427\") " pod="openshift-dns/dns-default-grr76" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.200103 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7cb105c3-30b4-4545-8608-416e248b1345-proxy-tls\") pod \"machine-config-controller-84d6567774-q9b62\" (UID: \"7cb105c3-30b4-4545-8608-416e248b1345\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.200284 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cd61879-7692-46b1-85d0-11f19a350bde-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.203268 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/314a92e1-2444-4ebd-a080-7619bed44c0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b4qzt\" (UID: \"314a92e1-2444-4ebd-a080-7619bed44c0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.237866 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vrp9q"] Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.244498 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp7c7\" (UniqueName: \"kubernetes.io/projected/f1e75e2a-e00f-4299-8d19-83c62c76ed52-kube-api-access-wp7c7\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.248038 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hhs2\" (UniqueName: \"kubernetes.io/projected/8befdf1d-c770-4804-bce0-ef5cc8787c8f-kube-api-access-9hhs2\") pod \"console-operator-58897d9998-n956g\" (UID: \"8befdf1d-c770-4804-bce0-ef5cc8787c8f\") " pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.252695 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:57 crc kubenswrapper[4888]: E1006 15:02:57.253370 4888 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:57.753350299 +0000 UTC m=+117.565701017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.261468 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpw2m\" (UniqueName: \"kubernetes.io/projected/8b65d758-78c2-4e61-8553-2298157b49a3-kube-api-access-fpw2m\") pod \"controller-manager-879f6c89f-bphs2\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.287955 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lct75\" (UniqueName: \"kubernetes.io/projected/5919f36e-dcc7-439b-9660-63f7b8c32b5a-kube-api-access-lct75\") pod \"router-default-5444994796-t5brn\" (UID: \"5919f36e-dcc7-439b-9660-63f7b8c32b5a\") " pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.295897 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.302013 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvnm7\" (UniqueName: \"kubernetes.io/projected/023bca41-f6db-453c-b936-1f3ebf1d675b-kube-api-access-bvnm7\") pod \"machine-config-server-xsrjj\" (UID: \"023bca41-f6db-453c-b936-1f3ebf1d675b\") " pod="openshift-machine-config-operator/machine-config-server-xsrjj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.321890 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmmq4\" (UniqueName: \"kubernetes.io/projected/800c2317-34fe-4640-b0fc-d275fccca804-kube-api-access-pmmq4\") pod \"dns-operator-744455d44c-29s58\" (UID: \"800c2317-34fe-4640-b0fc-d275fccca804\") " pod="openshift-dns-operator/dns-operator-744455d44c-29s58" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.330918 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.331278 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cd61879-7692-46b1-85d0-11f19a350bde-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.338517 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xsrjj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.339281 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6wlzb"] Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.350657 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7pq8\" (UniqueName: \"kubernetes.io/projected/7cb105c3-30b4-4545-8608-416e248b1345-kube-api-access-s7pq8\") pod \"machine-config-controller-84d6567774-q9b62\" (UID: \"7cb105c3-30b4-4545-8608-416e248b1345\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.355527 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: E1006 15:02:57.356015 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:57.855992548 +0000 UTC m=+117.668343276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.362619 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.372402 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.376561 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p5kt\" (UniqueName: \"kubernetes.io/projected/4cd61879-7692-46b1-85d0-11f19a350bde-kube-api-access-8p5kt\") pod \"cluster-image-registry-operator-dc59b4c8b-8ggzj\" (UID: \"4cd61879-7692-46b1-85d0-11f19a350bde\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.387989 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.397557 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58"] Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.397608 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp"] Oct 06 15:02:57 crc kubenswrapper[4888]: W1006 15:02:57.398128 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe6b2fe_b6ea_49eb_8f71_552c70f42e37.slice/crio-bf717719a65cb1f4019e844e345dab4c37309309dfa2d683e3017b6faa6b45f7 WatchSource:0}: Error finding container bf717719a65cb1f4019e844e345dab4c37309309dfa2d683e3017b6faa6b45f7: Status 404 returned error can't find the container with id bf717719a65cb1f4019e844e345dab4c37309309dfa2d683e3017b6faa6b45f7 Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.399178 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28hpm\" (UniqueName: \"kubernetes.io/projected/20d7ff7d-bdc3-4bd6-b748-b44da7dc0427-kube-api-access-28hpm\") pod \"dns-default-grr76\" (UID: \"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427\") " pod="openshift-dns/dns-default-grr76" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.411273 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv6qk\" (UniqueName: \"kubernetes.io/projected/6e701c92-0292-4aef-a963-cafeed71db2f-kube-api-access-zv6qk\") pod \"kube-storage-version-migrator-operator-b67b599dd-8l7f7\" (UID: \"6e701c92-0292-4aef-a963-cafeed71db2f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.430451 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5w4r\" (UniqueName: \"kubernetes.io/projected/bc56c25b-5e9c-44dc-a333-14e2aa680f44-kube-api-access-q5w4r\") pod \"packageserver-d55dfcdfc-7cv44\" (UID: \"bc56c25b-5e9c-44dc-a333-14e2aa680f44\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.456949 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:57 crc kubenswrapper[4888]: E1006 15:02:57.457299 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:57.957281056 +0000 UTC m=+117.769631774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.457373 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.465160 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2ngqv\" (UID: \"54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.465354 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.473522 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5z4p\" (UniqueName: \"kubernetes.io/projected/da1ba4fe-4298-43ab-90f4-24796daca3e4-kube-api-access-w5z4p\") pod \"ingress-canary-pp792\" (UID: \"da1ba4fe-4298-43ab-90f4-24796daca3e4\") " pod="openshift-ingress-canary/ingress-canary-pp792" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.473965 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.485487 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.491449 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w857d\" (UniqueName: \"kubernetes.io/projected/7005cda1-a3ae-48c8-80d4-d5d14496e419-kube-api-access-w857d\") pod \"machine-config-operator-74547568cd-nktzz\" (UID: \"7005cda1-a3ae-48c8-80d4-d5d14496e419\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.493122 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.526931 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.532861 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ghnft"] Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.587438 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.588091 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.589350 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: E1006 15:02:57.589702 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:58.089688242 +0000 UTC m=+117.902038960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.589833 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-29s58" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.602372 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/314a92e1-2444-4ebd-a080-7619bed44c0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b4qzt\" (UID: \"314a92e1-2444-4ebd-a080-7619bed44c0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.632434 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pp792" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.641057 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-grr76" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.656137 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xsrjj" event={"ID":"023bca41-f6db-453c-b936-1f3ebf1d675b","Type":"ContainerStarted","Data":"729eba54fb4290c38575cdd0301e6d349ff8ed5f47091687ef845598ff5d685b"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.664432 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr2nn\" (UniqueName: \"kubernetes.io/projected/1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3-kube-api-access-vr2nn\") pod \"package-server-manager-789f6589d5-gj6m8\" (UID: \"1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.664900 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1e75e2a-e00f-4299-8d19-83c62c76ed52-bound-sa-token\") pod \"ingress-operator-5b745b69d9-69vd6\" (UID: \"f1e75e2a-e00f-4299-8d19-83c62c76ed52\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.665316 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svv6j\" (UniqueName: \"kubernetes.io/projected/94d5add2-07ad-4171-bebc-8129f4819ccb-kube-api-access-svv6j\") pod \"csi-hostpathplugin-s5g2l\" (UID: \"94d5add2-07ad-4171-bebc-8129f4819ccb\") " pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.674167 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsxpr\" (UniqueName: \"kubernetes.io/projected/2451070b-304a-439f-a920-27834d657820-kube-api-access-vsxpr\") pod \"multus-admission-controller-857f4d67dd-5vzr4\" (UID: \"2451070b-304a-439f-a920-27834d657820\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.690160 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:57 crc kubenswrapper[4888]: E1006 15:02:57.690435 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:58.190414944 +0000 UTC m=+118.002765662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.700933 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" event={"ID":"003dd26e-1863-4682-af68-09b3584a44d6","Type":"ContainerStarted","Data":"aa6b4d04c53d86846867307c401621f3380e20784e7c472a46308af790c15966"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.700977 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" event={"ID":"003dd26e-1863-4682-af68-09b3584a44d6","Type":"ContainerStarted","Data":"6a441d51591e08e9da958f0064413d07865ad3f35a20c1bde654b6cb7321e95e"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.716123 4888 generic.go:334] "Generic (PLEG): container finished" podID="661fb82e-5117-41bb-a175-bf72f6c288bd" containerID="a9959b40c22fcc604f3afd690a5caec5ceb0bf847692e062301a09485c0f560b" exitCode=0 Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.716325 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" event={"ID":"661fb82e-5117-41bb-a175-bf72f6c288bd","Type":"ContainerDied","Data":"a9959b40c22fcc604f3afd690a5caec5ceb0bf847692e062301a09485c0f560b"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.716993 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" event={"ID":"661fb82e-5117-41bb-a175-bf72f6c288bd","Type":"ContainerStarted","Data":"875c277cc69c0bae4924d1504dae4c796091421716dcb5e546738d9a24ee90f4"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.724276 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p26ff"] Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.725309 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" event={"ID":"4c4879a6-596d-45ab-acfa-d1d50894efd9","Type":"ContainerStarted","Data":"bcd63af556813ed5e701c257f37b16dcb02d214b556b1a6ed247dec4ba910075"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.740673 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" event={"ID":"e9ca1572-99ce-4516-96ae-1a9772e4cb35","Type":"ContainerStarted","Data":"3934e339f36456e964f863234768ded580884e42fa9f31ee7b785252ad69e3f9"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.747512 4888 generic.go:334] "Generic (PLEG): container finished" podID="ecd7117d-afeb-4c89-a4ba-0b098f9ca84a" containerID="06f9e5b2925dd698b20e9f82b114f910a5f3960f0f7af025e927aee08302d4c2" exitCode=0 Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.747570 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" event={"ID":"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a","Type":"ContainerDied","Data":"06f9e5b2925dd698b20e9f82b114f910a5f3960f0f7af025e927aee08302d4c2"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 
15:02:57.751241 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vrp9q" event={"ID":"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39","Type":"ContainerStarted","Data":"382e607b01d1926bdb7c53489613bb1baac83a0c3f0969e86e59d3032eb47974"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.761647 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" event={"ID":"1d6dbd47-5f20-486f-864c-7042f45c2ab4","Type":"ContainerStarted","Data":"03f626845378fcd27b00bd4848a533136e04017e4c8bf45b1c9a5c9b1758fbf6"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.761697 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" event={"ID":"1d6dbd47-5f20-486f-864c-7042f45c2ab4","Type":"ContainerStarted","Data":"95396a036ff616fb179657aa2f0642179a6b8303575ba3a81d92aa9114865b8b"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.763576 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" event={"ID":"6093f83d-6829-4712-91d0-eeed9f69d78d","Type":"ContainerStarted","Data":"8a645f738b4680814001eaf6f42c6d439d3719436c0f9d06963f2b6b946c2d6f"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.763948 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.771466 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" event={"ID":"39249375-9962-457a-88bf-88c67b0ae936","Type":"ContainerStarted","Data":"ec98d0ffad2f2d5d5b6b82c6d01bf73ec9a2b5e184319fe3c5cd0e0c8ba36d23"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.775226 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6wlzb" event={"ID":"1fe6b2fe-b6ea-49eb-8f71-552c70f42e37","Type":"ContainerStarted","Data":"bf717719a65cb1f4019e844e345dab4c37309309dfa2d683e3017b6faa6b45f7"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.775534 4888 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7g2cw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.775611 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" podUID="6093f83d-6829-4712-91d0-eeed9f69d78d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.780920 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" event={"ID":"58cfe6b4-ea63-4ea9-86db-09034644d817","Type":"ContainerStarted","Data":"194fa7322a4cfeabeaabc6ce380a5c234802c11d16590585edb3f274a71da3e5"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.780968 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" 
event={"ID":"58cfe6b4-ea63-4ea9-86db-09034644d817","Type":"ContainerStarted","Data":"e50372320ccade117feb1fcaa8053e0a3dee8411510c989e4d2669c056d516f1"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.781367 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.782764 4888 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4k4rc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.782986 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" podUID="58cfe6b4-ea63-4ea9-86db-09034644d817" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.791515 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:57 crc kubenswrapper[4888]: E1006 15:02:57.794622 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:58.294605878 +0000 UTC m=+118.106956596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.805717 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" event={"ID":"a856af6d-ccf3-46be-9ad5-81206cec4cee","Type":"ContainerStarted","Data":"7f366214be3a55348d871c475f96484be26072db3fd3f869684805b57b39dd99"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.808582 4888 generic.go:334] "Generic (PLEG): container finished" podID="01614ba7-e313-44cc-9704-d7ea6bbfc7ed" containerID="b816bf3d8ed2a65adba7d64f17bb93a775c69ad8b84f95be26a7f2dc7d2428df" exitCode=0 Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.808634 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" event={"ID":"01614ba7-e313-44cc-9704-d7ea6bbfc7ed","Type":"ContainerDied","Data":"b816bf3d8ed2a65adba7d64f17bb93a775c69ad8b84f95be26a7f2dc7d2428df"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.810538 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" event={"ID":"9ef05c84-15d5-413d-baee-70e7ae0e2a8f","Type":"ContainerStarted","Data":"f2f9dadff1d2158f409c7b38dd28a7955c7126aab3da23b9efb3461e5b06144c"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.810560 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" event={"ID":"9ef05c84-15d5-413d-baee-70e7ae0e2a8f","Type":"ContainerStarted","Data":"fa6d5f780449564c501df44c8f2da7d122953ea540a985e17125989ec2bb2be4"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.811976 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" event={"ID":"050b3bfa-d33d-4729-b4b5-088f03ab45ab","Type":"ContainerStarted","Data":"ac364feca72ffbbf522a5fd348b9b902824d7b62a05d4211a30115f3dff67675"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.814239 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" event={"ID":"75cff788-fdba-4c8b-b765-d8a5c01b39a6","Type":"ContainerStarted","Data":"1f7de9fee503cc771d302a0cf18cccfbcbc86746591284ab6176c04c311f31d4"} Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.838017 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.850595 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.875487 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.881105 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.900444 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:57 crc kubenswrapper[4888]: E1006 15:02:57.902015 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:58.401988304 +0000 UTC m=+118.214339022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:57 crc kubenswrapper[4888]: I1006 15:02:57.923070 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.006689 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:58 crc kubenswrapper[4888]: E1006 15:02:58.012634 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:58.512614784 +0000 UTC m=+118.324965502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.074313 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bphs2"] Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.139707 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:58 crc kubenswrapper[4888]: E1006 15:02:58.139962 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:58.639944151 +0000 UTC m=+118.452294869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.174912 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm"] Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.244492 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:58 crc kubenswrapper[4888]: E1006 15:02:58.244851 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:58.744820774 +0000 UTC m=+118.557171492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.298967 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh"] Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.347396 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m"] Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.350027 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:58 crc kubenswrapper[4888]: E1006 15:02:58.350499 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:58.850481221 +0000 UTC m=+118.662831939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.387362 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-n956g"] Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.453910 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:58 crc kubenswrapper[4888]: E1006 15:02:58.455276 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:58.955257781 +0000 UTC m=+118.767608499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.518606 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jzsld" podStartSLOduration=98.518587265 podStartE2EDuration="1m38.518587265s" podCreationTimestamp="2025-10-06 15:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:58.475282244 +0000 UTC m=+118.287632962" watchObservedRunningTime="2025-10-06 15:02:58.518587265 +0000 UTC m=+118.330937983" Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.555933 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:58 crc kubenswrapper[4888]: E1006 15:02:58.556291 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:59.056272493 +0000 UTC m=+118.868623211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.601445 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4r4q2" podStartSLOduration=97.601423037 podStartE2EDuration="1m37.601423037s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:58.599936603 +0000 UTC m=+118.412287331" watchObservedRunningTime="2025-10-06 15:02:58.601423037 +0000 UTC m=+118.413773755" Oct 06 15:02:58 crc kubenswrapper[4888]: W1006 15:02:58.654621 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8befdf1d_c770_4804_bce0_ef5cc8787c8f.slice/crio-1b4fe8e7cf3a0dbf678736d45bc7852f2e7cb6484fe64659e670d4d6a25d89fa WatchSource:0}: Error finding container 1b4fe8e7cf3a0dbf678736d45bc7852f2e7cb6484fe64659e670d4d6a25d89fa: Status 404 returned error can't find the container with id 1b4fe8e7cf3a0dbf678736d45bc7852f2e7cb6484fe64659e670d4d6a25d89fa Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.668710 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:58 crc kubenswrapper[4888]: E1006 15:02:58.669065 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:59.169051275 +0000 UTC m=+118.981401993 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.721498 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" podStartSLOduration=97.721480162 podStartE2EDuration="1m37.721480162s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:58.65479213 +0000 UTC m=+118.467142848" watchObservedRunningTime="2025-10-06 15:02:58.721480162 +0000 UTC m=+118.533830880" Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.809409 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:58 crc kubenswrapper[4888]: E1006 15:02:58.811213 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:59.311189554 +0000 UTC m=+119.123540272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.863903 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" podStartSLOduration=97.863883978 podStartE2EDuration="1m37.863883978s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:58.769296734 +0000 UTC m=+118.581647452" watchObservedRunningTime="2025-10-06 15:02:58.863883978 +0000 UTC m=+118.676234696" Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.914161 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:58 crc kubenswrapper[4888]: E1006 15:02:58.914493 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:59.414482271 +0000 UTC m=+119.226832989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:58 crc kubenswrapper[4888]: I1006 15:02:58.956907 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" podStartSLOduration=98.956890656 podStartE2EDuration="1m38.956890656s" podCreationTimestamp="2025-10-06 15:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:58.944864875 +0000 UTC m=+118.757215593" watchObservedRunningTime="2025-10-06 15:02:58.956890656 +0000 UTC m=+118.769241364" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.044166 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:59 crc kubenswrapper[4888]: E1006 15:02:59.044480 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:59.544457165 +0000 UTC m=+119.356807883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.123268 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-ftsgm"] Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.123303 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7"] Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.126671 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" event={"ID":"1d6dbd47-5f20-486f-864c-7042f45c2ab4","Type":"ContainerStarted","Data":"6f3e00a6d2f962a0b157cd9ccae2ce6787c84e55591610a7db5c0b3137dc03a3"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.147005 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:59 crc kubenswrapper[4888]: E1006 15:02:59.147323 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:59.647308129 +0000 UTC m=+119.459658847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.186013 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pv57g" podStartSLOduration=98.185993356 podStartE2EDuration="1m38.185993356s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:59.122500767 +0000 UTC m=+118.934851485" watchObservedRunningTime="2025-10-06 15:02:59.185993356 +0000 UTC m=+118.998344084" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.247388 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6wlzb" event={"ID":"1fe6b2fe-b6ea-49eb-8f71-552c70f42e37","Type":"ContainerStarted","Data":"d4ba886ebf76935457eeac05fc35c40b85ca92b01838c1ddacb96d62f67c78fd"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.248245 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6wlzb" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.248944 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:59 crc kubenswrapper[4888]: E1006 15:02:59.249687 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:02:59.74967146 +0000 UTC m=+119.562022168 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.301953 4888 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wlzb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.302012 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wlzb" podUID="1fe6b2fe-b6ea-49eb-8f71-552c70f42e37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.347181 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-t5brn" event={"ID":"5919f36e-dcc7-439b-9660-63f7b8c32b5a","Type":"ContainerStarted","Data":"bc3303f1de147db0a848076d9b865e8dd06af9c2be609111603531f056084b65"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.350088 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:59 crc kubenswrapper[4888]: E1006 15:02:59.350588 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:02:59.850575467 +0000 UTC m=+119.662926185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.399735 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gx2qv" event={"ID":"a856af6d-ccf3-46be-9ad5-81206cec4cee","Type":"ContainerStarted","Data":"d1285a7b9feac9e1623bec51bd5ced988235f0d7dbff40bdf219d6750dfdd279"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.401234 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-n956g" event={"ID":"8befdf1d-c770-4804-bce0-ef5cc8787c8f","Type":"ContainerStarted","Data":"1b4fe8e7cf3a0dbf678736d45bc7852f2e7cb6484fe64659e670d4d6a25d89fa"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.402103 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m" event={"ID":"5686c4e3-454d-4282-88d5-326ee00e2e2a","Type":"ContainerStarted","Data":"102c0ebf3cd226b06c69d4c9dc4b003f074f3e14f568c0b9d16bad265bfad4ff"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.504886 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.536490 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mkw2s" podStartSLOduration=99.536469739 podStartE2EDuration="1m39.536469739s" podCreationTimestamp="2025-10-06 15:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:59.395579627 +0000 UTC m=+119.207930345" watchObservedRunningTime="2025-10-06 15:02:59.536469739 +0000 UTC m=+119.348820457" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.544124 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" event={"ID":"8b65d758-78c2-4e61-8553-2298157b49a3","Type":"ContainerStarted","Data":"cf7033e517b66243144d9fa958b78fdc8b916eed67643396e1f0906e6789f060"} Oct 06 15:02:59 crc kubenswrapper[4888]: E1006 15:02:59.574442 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:00.074411484 +0000 UTC m=+119.886762202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.624474 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:59 crc kubenswrapper[4888]: E1006 15:02:59.625038 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:00.125023517 +0000 UTC m=+119.937374235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.646230 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj"] Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.734160 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:59 crc kubenswrapper[4888]: E1006 15:02:59.734484 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:00.234466234 +0000 UTC m=+120.046816952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.735775 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" event={"ID":"821ef1e1-2128-4c28-9030-8faacb7d5fb7","Type":"ContainerStarted","Data":"df5b569edcdb574f49df3686be9604b624ab4ad3bb5b42e5bd535f725cc9c01e"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.736585 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.767600 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" event={"ID":"75cff788-fdba-4c8b-b765-d8a5c01b39a6","Type":"ContainerStarted","Data":"6904d654dd2a1e3495323ba65b4ffebbd3be9ee1f93276bba3ff5407116713f1"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.768544 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.768614 4888 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p26ff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.768639 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" podUID="821ef1e1-2128-4c28-9030-8faacb7d5fb7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.786142 4888 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-cw2lp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.786186 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" podUID="75cff788-fdba-4c8b-b765-d8a5c01b39a6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.840675 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:59 crc kubenswrapper[4888]: E1006 15:02:59.843219 4888 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:00.343204549 +0000 UTC m=+120.155555267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.921885 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vrp9q" event={"ID":"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39","Type":"ContainerStarted","Data":"4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.955392 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" event={"ID":"39249375-9962-457a-88bf-88c67b0ae936","Type":"ContainerStarted","Data":"a161e8f229881f1ece3c08272e624f9b9f9b383fd3de2fd542f55f3ebab037bb"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.957005 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:02:59 crc kubenswrapper[4888]: E1006 15:02:59.957104 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:00.457088024 +0000 UTC m=+120.269438742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.957343 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:02:59 crc kubenswrapper[4888]: E1006 15:02:59.957674 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:00.457664461 +0000 UTC m=+120.270015169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.960734 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" podStartSLOduration=99.96071393 podStartE2EDuration="1m39.96071393s" podCreationTimestamp="2025-10-06 15:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:59.886583942 +0000 UTC m=+119.698934660" watchObservedRunningTime="2025-10-06 15:02:59.96071393 +0000 UTC m=+119.773064648" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.960986 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" podStartSLOduration=98.960981258 podStartE2EDuration="1m38.960981258s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:02:59.956432215 +0000 UTC m=+119.768782933" watchObservedRunningTime="2025-10-06 15:02:59.960981258 +0000 UTC m=+119.773331986" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.979791 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xsrjj" event={"ID":"023bca41-f6db-453c-b936-1f3ebf1d675b","Type":"ContainerStarted","Data":"7ff3e9c4faaa21d7aa89be15dcda9c8b7b444af86dd4c01ea86adc3f0941499d"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.988863 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh" event={"ID":"e4c47c9f-0090-4ef0-9c59-a705daef8d94","Type":"ContainerStarted","Data":"614ae23c3d54aaa4167343dbd3c13844a59975a61314e2ef3579a80fc0716631"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.989751 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" event={"ID":"e9ca1572-99ce-4516-96ae-1a9772e4cb35","Type":"ContainerStarted","Data":"7b27f205f38afd17a23e23c504b8cac7559446dd5eb35f7d02d860147cb9ea46"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.991178 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" event={"ID":"b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450","Type":"ContainerStarted","Data":"0771ee6fdecc9dbb6a1db70b9a61d8a3cd32a9bc3f9ae0552305f57faca8bc37"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.991198 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" event={"ID":"b6aeb5ab-e5c0-4044-9eb1-0017d2b8c450","Type":"ContainerStarted","Data":"0f1f8c4ad6e76de3a1d65f44b95b6b8f4f57a822fb325525224655802ef02d57"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.992416 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" event={"ID":"dafa03ea-82b7-49f1-bec8-69bec1c31e50","Type":"ContainerStarted","Data":"3f957fea071999b0ef624502626840e0cde73275c5857010fc73030749bce77a"} Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.994870 4888 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7g2cw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.994902 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" podUID="6093f83d-6829-4712-91d0-eeed9f69d78d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.994962 4888 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4k4rc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Oct 06 15:02:59 crc kubenswrapper[4888]: I1006 15:02:59.994975 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc" podUID="58cfe6b4-ea63-4ea9-86db-09034644d817" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.076700 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:00 crc kubenswrapper[4888]: E1006 15:03:00.077921 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:00.577716796 +0000 UTC m=+120.390067534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.170361 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pp792"] Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.181918 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:00 crc kubenswrapper[4888]: E1006 15:03:00.182840 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:00.682822737 +0000 UTC m=+120.495173455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.283644 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:00 crc kubenswrapper[4888]: E1006 15:03:00.284111 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:00.784093075 +0000 UTC m=+120.596443793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.390303 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:00 crc kubenswrapper[4888]: E1006 15:03:00.398646 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:00.898611479 +0000 UTC m=+120.710962197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.468265 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kcs58" podStartSLOduration=99.468246576 podStartE2EDuration="1m39.468246576s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:00.466462454 +0000 UTC m=+120.278813172" watchObservedRunningTime="2025-10-06 15:03:00.468246576 +0000 UTC m=+120.280597284" Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.494772 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:00 crc kubenswrapper[4888]: E1006 15:03:00.495261 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:00.995238232 +0000 UTC m=+120.807588950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.543485 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-vrp9q" podStartSLOduration=99.543468566 podStartE2EDuration="1m39.543468566s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:00.541678594 +0000 UTC m=+120.354029312" watchObservedRunningTime="2025-10-06 15:03:00.543468566 +0000 UTC m=+120.355819284" Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.579616 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv"] Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.629866 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:00 crc kubenswrapper[4888]: E1006 15:03:00.630463 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:01.130448098 +0000 UTC m=+120.942798816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.632448 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xsrjj" podStartSLOduration=6.632434186 podStartE2EDuration="6.632434186s" podCreationTimestamp="2025-10-06 15:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:00.625549856 +0000 UTC m=+120.437900574" watchObservedRunningTime="2025-10-06 15:03:00.632434186 +0000 UTC m=+120.444784904" Oct 06 15:03:00 crc kubenswrapper[4888]: W1006 15:03:00.678961 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54d3d46a_d4cb_4110_91a1_7e3ab8cba2bb.slice/crio-f5ceddea67930a0c50b1c973227dfc3510f9736dc0ea6dae619e49c3d8ff5ff9 WatchSource:0}: Error finding container f5ceddea67930a0c50b1c973227dfc3510f9736dc0ea6dae619e49c3d8ff5ff9: Status 404 returned error can't find the container with id f5ceddea67930a0c50b1c973227dfc3510f9736dc0ea6dae619e49c3d8ff5ff9 Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.738014 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:00 crc kubenswrapper[4888]: E1006 15:03:00.738298 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:01.238281698 +0000 UTC m=+121.050632416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.748903 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44"] Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.750352 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" podStartSLOduration=99.750341289 podStartE2EDuration="1m39.750341289s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:00.749254998 +0000 UTC m=+120.561605716" watchObservedRunningTime="2025-10-06 15:03:00.750341289 +0000 UTC m=+120.562692007" Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.830877 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" podStartSLOduration=99.830857713 podStartE2EDuration="1m39.830857713s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:00.799912912 +0000 UTC m=+120.612263640" watchObservedRunningTime="2025-10-06 15:03:00.830857713 +0000 UTC m=+120.643208431" Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.832644 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" podStartSLOduration=99.832636805 podStartE2EDuration="1m39.832636805s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:00.830215794 +0000 UTC m=+120.642566522" watchObservedRunningTime="2025-10-06 15:03:00.832636805 +0000 UTC m=+120.644987523" Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.855055 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:00 crc kubenswrapper[4888]: E1006 15:03:00.855609 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:01.355592343 +0000 UTC m=+121.167943061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:00 crc kubenswrapper[4888]: I1006 15:03:00.971391 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:00 crc kubenswrapper[4888]: E1006 15:03:00.972345 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:01.472321631 +0000 UTC m=+121.284672349 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.034902 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6wlzb" podStartSLOduration=100.034884713 podStartE2EDuration="1m40.034884713s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:01.030588018 +0000 UTC m=+120.842938736" watchObservedRunningTime="2025-10-06 15:03:01.034884713 +0000 UTC m=+120.847235431" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.080811 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:01 crc kubenswrapper[4888]: E1006 15:03:01.081076 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:01.581065858 +0000 UTC m=+121.393416576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.085344 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" event={"ID":"6e701c92-0292-4aef-a963-cafeed71db2f","Type":"ContainerStarted","Data":"e386019f59cb67fad47132ae41ad1727b72ba556f202d4014d0876680b778312"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.085404 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" event={"ID":"6e701c92-0292-4aef-a963-cafeed71db2f","Type":"ContainerStarted","Data":"81a43a3f2b8d4ca2042b7594de3908e9115f16626e354cd232603274873eeaf8"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.115485 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" event={"ID":"821ef1e1-2128-4c28-9030-8faacb7d5fb7","Type":"ContainerStarted","Data":"9047af1b88f2929e7e6ef0ed9e258db71888d8e6e8ba78dab50b616d3631b262"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.116765 4888 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p26ff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.116800 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" podUID="821ef1e1-2128-4c28-9030-8faacb7d5fb7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.125580 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" event={"ID":"41f43231-5670-4506-8593-4cf40be3f95c","Type":"ContainerStarted","Data":"78906b3512543cc9e0accbb414d20f41e145d21e8ab8b56afc6cb2e986af35e0"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.125627 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" event={"ID":"41f43231-5670-4506-8593-4cf40be3f95c","Type":"ContainerStarted","Data":"499ba4970f4c58b70314ad1e1ac660cbab082af6c491843526b4ef6704f70cf9"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.139102 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghnft" podStartSLOduration=100.139086936 podStartE2EDuration="1m40.139086936s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:01.121311639 +0000 UTC m=+120.933662357" watchObservedRunningTime="2025-10-06 15:03:01.139086936 +0000 UTC m=+120.951437654" Oct 
06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.159528 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" event={"ID":"4cd61879-7692-46b1-85d0-11f19a350bde","Type":"ContainerStarted","Data":"cf66a1ef1a7fac0fbda89d5600a84032e0e55d48bf329327fdcd37f81b00a823"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.159570 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" event={"ID":"4cd61879-7692-46b1-85d0-11f19a350bde","Type":"ContainerStarted","Data":"3253af4ff80a5947679f0a9d99f5984830e12456ef7bf0b569426256be77cbdf"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.166686 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" event={"ID":"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a","Type":"ContainerStarted","Data":"2fc426a4cd67f775ec005ae19edad5f6c8e55ca2405c192166ecdb8929368e83"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.206914 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:01 crc kubenswrapper[4888]: E1006 15:03:01.207141 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:01.707118297 +0000 UTC m=+121.519469025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.210224 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:01 crc kubenswrapper[4888]: E1006 15:03:01.210910 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:01.710893137 +0000 UTC m=+121.523243845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.221940 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" event={"ID":"54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb","Type":"ContainerStarted","Data":"f5ceddea67930a0c50b1c973227dfc3510f9736dc0ea6dae619e49c3d8ff5ff9"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.298502 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" event={"ID":"dafa03ea-82b7-49f1-bec8-69bec1c31e50","Type":"ContainerStarted","Data":"c4047f6d1e1c0e2d9601487963e4319782bac95a6f4180d34f3363bd25c03ca4"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.319052 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:01 crc kubenswrapper[4888]: E1006 15:03:01.319317 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:01.819300373 +0000 UTC m=+121.631651091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.350758 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" event={"ID":"8b65d758-78c2-4e61-8553-2298157b49a3","Type":"ContainerStarted","Data":"96013f6c01db5b3b35dd5a9f325cd177e6ff9c629b2901554b08cc7eb1210b00"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.351704 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.352798 4888 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bphs2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.352838 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" podUID="8b65d758-78c2-4e61-8553-2298157b49a3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.360656 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" event={"ID":"01614ba7-e313-44cc-9704-d7ea6bbfc7ed","Type":"ContainerStarted","Data":"0ee867cd975639ce035ed9e56a564e8612e1c0c3e637e2f8b91c02f6b3fc5d04"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.409666 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pp792" event={"ID":"da1ba4fe-4298-43ab-90f4-24796daca3e4","Type":"ContainerStarted","Data":"6792f9b555e1e2ae52453364c5f1ecc7f0f4ae03df6ed356ec40724ee02a7dbf"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.437332 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh" event={"ID":"e4c47c9f-0090-4ef0-9c59-a705daef8d94","Type":"ContainerStarted","Data":"b78911cc820066be6f861324aec88e6d420e7f87c6d553b181c5c24021faa0e7"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.437372 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh" event={"ID":"e4c47c9f-0090-4ef0-9c59-a705daef8d94","Type":"ContainerStarted","Data":"7b31e93dc8a2022d28980529940fdcacc6a46db1981c7dcee3bffd7abc27fef7"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.437933 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:01 
crc kubenswrapper[4888]: E1006 15:03:01.439883 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:01.939869843 +0000 UTC m=+121.752220561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.473705 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-n956g" event={"ID":"8befdf1d-c770-4804-bce0-ef5cc8787c8f","Type":"ContainerStarted","Data":"355b4859ea24d1b6e7d74a847c103c8b17662af773f2fa3075564bfb222cfb1a"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.474215 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.475691 4888 patch_prober.go:28] interesting pod/console-operator-58897d9998-n956g container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.475728 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-n956g" podUID="8befdf1d-c770-4804-bce0-ef5cc8787c8f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.476444 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-t5brn" event={"ID":"5919f36e-dcc7-439b-9660-63f7b8c32b5a","Type":"ContainerStarted","Data":"55f577a64d5d1335b4b04cf4f004a575d59535e6097b01736fb8a47284e1db52"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.495239 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.495576 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.495668 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.495757 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s5g2l"] Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.507259 4888 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f792m" event={"ID":"5686c4e3-454d-4282-88d5-326ee00e2e2a","Type":"ContainerStarted","Data":"f32767501c701d5ea54a78a045b9f9576faad1595503cb257263f42965d44466"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.539294 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:01 crc kubenswrapper[4888]: E1006 15:03:01.539630 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:02.039608967 +0000 UTC m=+121.851959685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.541329 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" event={"ID":"661fb82e-5117-41bb-a175-bf72f6c288bd","Type":"ContainerStarted","Data":"889e0402577d1b893e2ffcef7affc505dd939fc81be05ae267c65aab7bd964f6"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.542089 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.579843 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-29s58"] Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.589938 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" event={"ID":"bc56c25b-5e9c-44dc-a333-14e2aa680f44","Type":"ContainerStarted","Data":"581df63ff7b160f32c7c7ba5b9b21a7eece7f62b89c30c686b6ebd0ec0132af2"} Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.590739 4888 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wlzb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.590768 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wlzb" podUID="1fe6b2fe-b6ea-49eb-8f71-552c70f42e37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.593135 4888 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-cw2lp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.593239 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp" podUID="75cff788-fdba-4c8b-b765-d8a5c01b39a6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.593377 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-grr76"] Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.659810 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:01 crc kubenswrapper[4888]: E1006 15:03:01.668682 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:02.168655074 +0000 UTC m=+121.981005792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.778191 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:01 crc kubenswrapper[4888]: E1006 15:03:01.778610 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:02.278589995 +0000 UTC m=+122.090940713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:01 crc kubenswrapper[4888]: E1006 15:03:01.836370 4888 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54d3d46a_d4cb_4110_91a1_7e3ab8cba2bb.slice/crio-9b03db949f27f0e42646bcdf6b8c37ae9a84ab54dbbe1827c2c0bc806c74d6c4.scope\": RecentStats: unable to find data in memory cache]" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.859889 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8l7f7" podStartSLOduration=100.85987043 podStartE2EDuration="1m40.85987043s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:01.733655716 +0000 UTC m=+121.546006434" watchObservedRunningTime="2025-10-06 15:03:01.85987043 +0000 UTC m=+121.672221148" Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.884167 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:01 crc kubenswrapper[4888]: E1006 15:03:01.884549 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:02.384536359 +0000 UTC m=+122.196887077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.975904 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62"]
Oct 06 15:03:01 crc kubenswrapper[4888]: I1006 15:03:01.984882 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:01 crc kubenswrapper[4888]: E1006 15:03:01.989185 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:02.489151305 +0000 UTC m=+122.301502023 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.020093 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8"]
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.060984 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz"]
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.063152 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8ggzj" podStartSLOduration=101.063138718 podStartE2EDuration="1m41.063138718s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:02.055036503 +0000 UTC m=+121.867387221" watchObservedRunningTime="2025-10-06 15:03:02.063138718 +0000 UTC m=+121.875489436"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.095448 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:02 crc kubenswrapper[4888]: E1006 15:03:02.095950 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:02.595931954 +0000 UTC m=+122.408282672 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.221955 4888 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j8c9b container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.221972 4888 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j8c9b container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.222011 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" podUID="661fb82e-5117-41bb-a175-bf72f6c288bd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.222029 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" podUID="661fb82e-5117-41bb-a175-bf72f6c288bd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.289691 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pp792" podStartSLOduration=8.289672654 podStartE2EDuration="8.289672654s" podCreationTimestamp="2025-10-06 15:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:02.284564445 +0000 UTC m=+122.096915163" watchObservedRunningTime="2025-10-06 15:03:02.289672654 +0000 UTC m=+122.102023372"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.289831 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" podStartSLOduration=101.289827678 podStartE2EDuration="1m41.289827678s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:02.256828697 +0000 UTC m=+122.069179415" watchObservedRunningTime="2025-10-06 15:03:02.289827678 +0000 UTC m=+122.102178396"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.325637 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:02 crc kubenswrapper[4888]: E1006 15:03:02.325951 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:02.825938459 +0000 UTC m=+122.638289177 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.348299 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h8lmh" podStartSLOduration=101.34828257 podStartE2EDuration="1m41.34828257s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:02.347538948 +0000 UTC m=+122.159889676" watchObservedRunningTime="2025-10-06 15:03:02.34828257 +0000 UTC m=+122.160633288"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.375497 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-t5brn" podStartSLOduration=101.375479932 podStartE2EDuration="1m41.375479932s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:02.372971369 +0000 UTC m=+122.185322097" watchObservedRunningTime="2025-10-06 15:03:02.375479932 +0000 UTC m=+122.187830650"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.391113 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5vzr4"]
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.416852 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-n956g" podStartSLOduration=101.416831486 podStartE2EDuration="1m41.416831486s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:02.414853449 +0000 UTC m=+122.227204177" watchObservedRunningTime="2025-10-06 15:03:02.416831486 +0000 UTC m=+122.229182204"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.427346 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:02 crc kubenswrapper[4888]: E1006 15:03:02.427629 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:02.9276109 +0000 UTC m=+122.739961618 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.434452 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" podStartSLOduration=101.434434558 podStartE2EDuration="1m41.434434558s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:02.433146871 +0000 UTC m=+122.245497589" watchObservedRunningTime="2025-10-06 15:03:02.434434558 +0000 UTC m=+122.246785276"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.498258 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-ftsgm" podStartSLOduration=101.498238226 podStartE2EDuration="1m41.498238226s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:02.480408517 +0000 UTC m=+122.292759235" watchObservedRunningTime="2025-10-06 15:03:02.498238226 +0000 UTC m=+122.310588944"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.528635 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:02 crc kubenswrapper[4888]: E1006 15:03:02.529551 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:03.029532707 +0000 UTC m=+122.841883425 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.556733 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 06 15:03:02 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld
Oct 06 15:03:02 crc kubenswrapper[4888]: [+]process-running ok
Oct 06 15:03:02 crc kubenswrapper[4888]: healthz check failed
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.556815 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.561951 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-wf5nm" podStartSLOduration=101.56192981 podStartE2EDuration="1m41.56192981s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:02.560464767 +0000 UTC m=+122.372815485" watchObservedRunningTime="2025-10-06 15:03:02.56192981 +0000 UTC m=+122.374280538"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.565979 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6"]
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.630578 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:02 crc kubenswrapper[4888]: E1006 15:03:02.631014 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:03.130988191 +0000 UTC m=+122.943338909 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.662434 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" event={"ID":"2451070b-304a-439f-a920-27834d657820","Type":"ContainerStarted","Data":"79c701f2b727eeedae1c26a120e460a79157f6d5807d6ed90c6ceb4da8f629e2"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.674538 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" podStartSLOduration=101.674518248 podStartE2EDuration="1m41.674518248s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:02.634852103 +0000 UTC m=+122.447202831" watchObservedRunningTime="2025-10-06 15:03:02.674518248 +0000 UTC m=+122.486868956"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.675417 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bsv89"]
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.676445 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsv89"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.690494 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.703663 4888 generic.go:334] "Generic (PLEG): container finished" podID="e9ca1572-99ce-4516-96ae-1a9772e4cb35" containerID="7b27f205f38afd17a23e23c504b8cac7559446dd5eb35f7d02d860147cb9ea46" exitCode=0
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.703885 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" event={"ID":"e9ca1572-99ce-4516-96ae-1a9772e4cb35","Type":"ContainerDied","Data":"7b27f205f38afd17a23e23c504b8cac7559446dd5eb35f7d02d860147cb9ea46"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.706659 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsv89"]
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.735958 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" event={"ID":"54d3d46a-d4cb-4110-91a1-7e3ab8cba2bb","Type":"ContainerStarted","Data":"9b03db949f27f0e42646bcdf6b8c37ae9a84ab54dbbe1827c2c0bc806c74d6c4"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.736990 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6b8x\" (UniqueName: \"kubernetes.io/projected/43206429-01b9-4d6c-8f90-a2f02ca09a1d-kube-api-access-k6b8x\") pod \"certified-operators-bsv89\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " pod="openshift-marketplace/certified-operators-bsv89"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.737028 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-catalog-content\") pod \"certified-operators-bsv89\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " pod="openshift-marketplace/certified-operators-bsv89"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.737051 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.737073 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-utilities\") pod \"certified-operators-bsv89\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " pod="openshift-marketplace/certified-operators-bsv89"
Oct 06 15:03:02 crc kubenswrapper[4888]: E1006 15:03:02.737357 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:03.237345708 +0000 UTC m=+123.049696426 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.753774 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" event={"ID":"bc56c25b-5e9c-44dc-a333-14e2aa680f44","Type":"ContainerStarted","Data":"ff30f0122e5587e51380c5df5f2ffd6e483191933ec1d6ad8452fd555903364e"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.754614 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.758985 4888 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7cv44 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body=
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.759041 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" podUID="bc56c25b-5e9c-44dc-a333-14e2aa680f44" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.768049 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" event={"ID":"1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3","Type":"ContainerStarted","Data":"54aafab100500213f9a5670065bc3f126aecdf43d102b536ac744637bba58cef"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.815031 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" event={"ID":"7cb105c3-30b4-4545-8608-416e248b1345","Type":"ContainerStarted","Data":"af33be72be03650a59b0b4ba279070422890f2ee5c69c76f2bcbe3afe7616553"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.815075 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" event={"ID":"7cb105c3-30b4-4545-8608-416e248b1345","Type":"ContainerStarted","Data":"7f95fdcba10db456ca39e4254ce0bbc51442d95200dba768765d6924e909252a"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.819416 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" podStartSLOduration=101.819392706 podStartE2EDuration="1m41.819392706s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:02.819141399 +0000 UTC m=+122.631492127" watchObservedRunningTime="2025-10-06 15:03:02.819392706 +0000 UTC m=+122.631743424"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.822419 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" event={"ID":"314a92e1-2444-4ebd-a080-7619bed44c0d","Type":"ContainerStarted","Data":"d5f6c196c3b659cbd21d04d3f22ae02e6e9b4e727528357bc720b99ad34f45b0"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.855835 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.856107 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6b8x\" (UniqueName: \"kubernetes.io/projected/43206429-01b9-4d6c-8f90-a2f02ca09a1d-kube-api-access-k6b8x\") pod \"certified-operators-bsv89\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " pod="openshift-marketplace/certified-operators-bsv89"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.856196 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-catalog-content\") pod \"certified-operators-bsv89\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " pod="openshift-marketplace/certified-operators-bsv89"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.856288 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-utilities\") pod \"certified-operators-bsv89\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " pod="openshift-marketplace/certified-operators-bsv89"
Oct 06 15:03:02 crc kubenswrapper[4888]: E1006 15:03:02.856434 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:03.356414614 +0000 UTC m=+123.168765332 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.857569 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-utilities\") pod \"certified-operators-bsv89\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " pod="openshift-marketplace/certified-operators-bsv89"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.857725 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-catalog-content\") pod \"certified-operators-bsv89\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " pod="openshift-marketplace/certified-operators-bsv89"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.859862 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" event={"ID":"94d5add2-07ad-4171-bebc-8129f4819ccb","Type":"ContainerStarted","Data":"3d41fcce5d4bb0e8ede27ff83cd5e3d2a2394820c2ce8862df162304a43abbc1"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.861097 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" event={"ID":"7005cda1-a3ae-48c8-80d4-d5d14496e419","Type":"ContainerStarted","Data":"56a7eb4674e4bfb3fbbd4f4a441afdf8004aa0751ae6a79f0ad8ec632e1e188f"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.861880 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-grr76" event={"ID":"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427","Type":"ContainerStarted","Data":"80f0ea9e8f15add5e82213497340b86c3c6d6eb8e495862cf39a65b0ebe5b78b"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.862508 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-29s58" event={"ID":"800c2317-34fe-4640-b0fc-d275fccca804","Type":"ContainerStarted","Data":"9bec099c483d7368f35e1756c38a9e223e58a1bcb0d2ba76d783df6dbef5696b"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.863937 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" event={"ID":"ecd7117d-afeb-4c89-a4ba-0b098f9ca84a","Type":"ContainerStarted","Data":"1eb34ad6f0d768fa27967fba412173e9e36793788a830e1a799d63b3d46ac9f0"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.869916 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pp792" event={"ID":"da1ba4fe-4298-43ab-90f4-24796daca3e4","Type":"ContainerStarted","Data":"61b4ebc01308d28b079a3f0fc58ac3f2c2651e3b12b25fbf481f74ab1d74f599"}
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.871552 4888 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wlzb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.871593 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wlzb" podUID="1fe6b2fe-b6ea-49eb-8f71-552c70f42e37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.872615 4888 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p26ff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.872634 4888 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j8c9b container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.872654 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" podUID="821ef1e1-2128-4c28-9030-8faacb7d5fb7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.872659 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" podUID="661fb82e-5117-41bb-a175-bf72f6c288bd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.872928 4888 patch_prober.go:28] interesting pod/console-operator-58897d9998-n956g container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.872950 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-n956g" podUID="8befdf1d-c770-4804-bce0-ef5cc8787c8f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.873237 4888 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bphs2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.873253 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" podUID="8b65d758-78c2-4e61-8553-2298157b49a3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.882952 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5nfpw"]
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.884900 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nfpw"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.887474 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cw2lp"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.893674 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.984271 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.984582 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-utilities\") pod \"community-operators-5nfpw\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " pod="openshift-marketplace/community-operators-5nfpw"
Oct 06 15:03:02 crc kubenswrapper[4888]: E1006 15:03:02.986736 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:03.486723077 +0000 UTC m=+123.299073795 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.987673 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhm7d\" (UniqueName: \"kubernetes.io/projected/b8705687-8217-4b9f-bed8-a293b8a041b0-kube-api-access-fhm7d\") pod \"community-operators-5nfpw\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " pod="openshift-marketplace/community-operators-5nfpw"
Oct 06 15:03:02 crc kubenswrapper[4888]: I1006 15:03:02.997113 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-catalog-content\") pod \"community-operators-5nfpw\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " pod="openshift-marketplace/community-operators-5nfpw"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.015135 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6b8x\" (UniqueName: \"kubernetes.io/projected/43206429-01b9-4d6c-8f90-a2f02ca09a1d-kube-api-access-k6b8x\") pod \"certified-operators-bsv89\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " pod="openshift-marketplace/certified-operators-bsv89"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.026308 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsv89"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.041890 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nfpw"]
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.046285 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2ngqv" podStartSLOduration=102.046262811 podStartE2EDuration="1m42.046262811s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:03.002544918 +0000 UTC m=+122.814895646" watchObservedRunningTime="2025-10-06 15:03:03.046262811 +0000 UTC m=+122.858613529"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.059667 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-574kn"]
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.065052 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-574kn"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.079701 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" podStartSLOduration=103.079678283 podStartE2EDuration="1m43.079678283s" podCreationTimestamp="2025-10-06 15:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:03.074883314 +0000 UTC m=+122.887234022" watchObservedRunningTime="2025-10-06 15:03:03.079678283 +0000 UTC m=+122.892029001"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.080728 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-574kn"]
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.102409 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.102717 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-utilities\") pod \"community-operators-5nfpw\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " pod="openshift-marketplace/community-operators-5nfpw"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.102782 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhm7d\" (UniqueName: \"kubernetes.io/projected/b8705687-8217-4b9f-bed8-a293b8a041b0-kube-api-access-fhm7d\") pod \"community-operators-5nfpw\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " pod="openshift-marketplace/community-operators-5nfpw"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.102894 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-catalog-content\") pod \"community-operators-5nfpw\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " pod="openshift-marketplace/community-operators-5nfpw"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.103527 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-catalog-content\") pod \"community-operators-5nfpw\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " pod="openshift-marketplace/community-operators-5nfpw"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.103784 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-utilities\") pod \"community-operators-5nfpw\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " pod="openshift-marketplace/community-operators-5nfpw"
Oct 06 15:03:03 crc kubenswrapper[4888]: E1006 15:03:03.104018 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:03.603977851 +0000 UTC m=+123.416328579 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.155955 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhm7d\" (UniqueName: \"kubernetes.io/projected/b8705687-8217-4b9f-bed8-a293b8a041b0-kube-api-access-fhm7d\") pod \"community-operators-5nfpw\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " pod="openshift-marketplace/community-operators-5nfpw"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.205351 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-catalog-content\") pod \"certified-operators-574kn\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " pod="openshift-marketplace/certified-operators-574kn"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.205414 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqmq8\" (UniqueName: \"kubernetes.io/projected/ca490a5a-c686-44ac-b282-d260a32fbe71-kube-api-access-nqmq8\") pod \"certified-operators-574kn\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " pod="openshift-marketplace/certified-operators-574kn"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.205482 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.205523 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-utilities\") pod \"certified-operators-574kn\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " pod="openshift-marketplace/certified-operators-574kn"
Oct 06 15:03:03 crc kubenswrapper[4888]: E1006 15:03:03.205909 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:03.705894168 +0000 UTC m=+123.518244886 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.282716 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n94pp"]
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.283918 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n94pp"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.302296 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n94pp"]
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.317296 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.317541 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-catalog-content\") pod \"certified-operators-574kn\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " pod="openshift-marketplace/certified-operators-574kn"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.317588 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqmq8\" (UniqueName: \"kubernetes.io/projected/ca490a5a-c686-44ac-b282-d260a32fbe71-kube-api-access-nqmq8\") pod \"certified-operators-574kn\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " pod="openshift-marketplace/certified-operators-574kn"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.317675 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-utilities\") pod \"certified-operators-574kn\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " pod="openshift-marketplace/certified-operators-574kn"
Oct 06 15:03:03 crc kubenswrapper[4888]: E1006 15:03:03.320042 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:03.820011921 +0000 UTC m=+123.632362719 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.321693 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-utilities\") pod \"certified-operators-574kn\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " pod="openshift-marketplace/certified-operators-574kn"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.324265 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-catalog-content\") pod \"certified-operators-574kn\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " pod="openshift-marketplace/certified-operators-574kn"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.394087 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nfpw"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.419224 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-catalog-content\") pod \"community-operators-n94pp\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " pod="openshift-marketplace/community-operators-n94pp"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.419596 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-utilities\") pod \"community-operators-n94pp\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " pod="openshift-marketplace/community-operators-n94pp"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.419619 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74bc2\" (UniqueName: \"kubernetes.io/projected/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-kube-api-access-74bc2\") pod \"community-operators-n94pp\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " pod="openshift-marketplace/community-operators-n94pp"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.419648 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:03 crc kubenswrapper[4888]: E1006 15:03:03.419990 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:03.919977571 +0000 UTC m=+123.732328289 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.429346 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqmq8\" (UniqueName: \"kubernetes.io/projected/ca490a5a-c686-44ac-b282-d260a32fbe71-kube-api-access-nqmq8\") pod \"certified-operators-574kn\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " pod="openshift-marketplace/certified-operators-574kn"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.483002 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-574kn"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.504959 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 06 15:03:03 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld
Oct 06 15:03:03 crc kubenswrapper[4888]: [+]process-running ok
Oct 06 15:03:03 crc kubenswrapper[4888]: healthz check failed
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.505023 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.532168 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.532461 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-utilities\") pod \"community-operators-n94pp\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " pod="openshift-marketplace/community-operators-n94pp"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.532493 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74bc2\" (UniqueName: \"kubernetes.io/projected/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-kube-api-access-74bc2\") pod \"community-operators-n94pp\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " pod="openshift-marketplace/community-operators-n94pp"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.532602 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-catalog-content\") pod \"community-operators-n94pp\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " pod="openshift-marketplace/community-operators-n94pp"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.533023 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-catalog-content\") pod \"community-operators-n94pp\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " pod="openshift-marketplace/community-operators-n94pp"
Oct 06 15:03:03 crc kubenswrapper[4888]: E1006 15:03:03.533090 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:04.033073813 +0000 UTC m=+123.845424531 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.533301 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-utilities\") pod \"community-operators-n94pp\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " pod="openshift-marketplace/community-operators-n94pp"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.577274 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74bc2\" (UniqueName: \"kubernetes.io/projected/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-kube-api-access-74bc2\") pod \"community-operators-n94pp\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " pod="openshift-marketplace/community-operators-n94pp"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.631942 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n94pp"
Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.637657 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:03 crc kubenswrapper[4888]: E1006 15:03:03.638042 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:04.138025349 +0000 UTC m=+123.950376067 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.739166 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:03 crc kubenswrapper[4888]: E1006 15:03:03.739343 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:04.239312627 +0000 UTC m=+124.051663345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.739549 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:03 crc kubenswrapper[4888]: E1006 15:03:03.739888 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:04.239879634 +0000 UTC m=+124.052230352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.846310 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:03 crc kubenswrapper[4888]: E1006 15:03:03.846702 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:04.346683464 +0000 UTC m=+124.159034172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:03 crc kubenswrapper[4888]: I1006 15:03:03.950174 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:03 crc kubenswrapper[4888]: E1006 15:03:03.950519 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:04.450502186 +0000 UTC m=+124.262852904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.025333 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" event={"ID":"314a92e1-2444-4ebd-a080-7619bed44c0d","Type":"ContainerStarted","Data":"72b80e739a8dd9454e213c78446ff88648003cdc09c30f862e2a07450d4fd822"} Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.038346 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" event={"ID":"1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3","Type":"ContainerStarted","Data":"1b545b7fc4018230fc631c1ce9ca332acfa4b353c133646070226c27133d5299"} Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.038392 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" event={"ID":"1b2b6de3-ab54-4fbd-95bc-a0c64e5c94f3","Type":"ContainerStarted","Data":"eb54352ff0ed5998f21e05292acd3672f908b726600b085a862e054ea4c080ed"} Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.038918 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.057621 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:04 crc kubenswrapper[4888]: E1006 15:03:04.058997 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:04.558971924 +0000 UTC m=+124.371322642 (durationBeforeRetry 500ms). 
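Note the cadence: each failed attempt is stamped "No retries permitted until <now + 500ms> (durationBeforeRetry 500ms)", so the reconciler re-queues roughly twice a second instead of busy-looping; in this capture the delay stays at the 500ms step throughout. A sketch of the gate those messages imply, under assumed semantics rather than the real nestedpendingoperations.go logic:

    package main

    import (
    	"fmt"
    	"time"
    )

    // pendingOp models "No retries permitted until <t>": a failed operation
    // records its deadline and is skipped until the deadline passes.
    type pendingOp struct {
    	lastErrorTime time.Time
    	retryAfter    time.Duration
    }

    func (o pendingOp) mayRetry(now time.Time) bool {
    	return !now.Before(o.lastErrorTime.Add(o.retryAfter))
    }

    func main() {
    	op := pendingOp{lastErrorTime: time.Now(), retryAfter: 500 * time.Millisecond}
    	fmt.Println(op.mayRetry(time.Now()))                  // false: still inside the window
    	fmt.Println(op.mayRetry(time.Now().Add(time.Second))) // true: deadline passed
    }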
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.110843 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" event={"ID":"7cb105c3-30b4-4545-8608-416e248b1345","Type":"ContainerStarted","Data":"3fb18ec5e74078d740b81c1de816980296b0fca3a24d2806e267d21f67c9ee24"} Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.120140 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" event={"ID":"2451070b-304a-439f-a920-27834d657820","Type":"ContainerStarted","Data":"88f9a321f57b4343e2c0123d7a8dde740c557abb49667c2656eca4909bc24cbf"} Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.126089 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" event={"ID":"7005cda1-a3ae-48c8-80d4-d5d14496e419","Type":"ContainerStarted","Data":"b9d04bcf76f85e74d8bbbdb9d01ff41da9ddc8cf2841872ec6d3619f4eddae2a"} Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.126132 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" event={"ID":"7005cda1-a3ae-48c8-80d4-d5d14496e419","Type":"ContainerStarted","Data":"5414677c6a7042c62962b571ac40a30a700a5ea0bb3a74ba10e9679469db2db7"} Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.152470 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b4qzt" podStartSLOduration=103.152451716 podStartE2EDuration="1m43.152451716s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:04.097360822 +0000 UTC m=+123.909711530" watchObservedRunningTime="2025-10-06 15:03:04.152451716 +0000 UTC m=+123.964802434" Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.153104 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" podStartSLOduration=103.153097155 podStartE2EDuration="1m43.153097155s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:04.150244771 +0000 UTC m=+123.962595489" watchObservedRunningTime="2025-10-06 15:03:04.153097155 +0000 UTC m=+123.965447893" Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.157437 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" event={"ID":"f1e75e2a-e00f-4299-8d19-83c62c76ed52","Type":"ContainerStarted","Data":"29666fa07857c8e577893d29f09423378b8c76608be40df66ff30f53e13f9da5"} Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.157481 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" event={"ID":"f1e75e2a-e00f-4299-8d19-83c62c76ed52","Type":"ContainerStarted","Data":"9b429002c1b895f5548a5de303d8eb951807a557e8385c953539698771500542"} Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.184401 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:04 crc kubenswrapper[4888]: E1006 15:03:04.185798 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:04.685781656 +0000 UTC m=+124.498132374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.285746 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:04 crc kubenswrapper[4888]: E1006 15:03:04.286572 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:04.78654579 +0000 UTC m=+124.598896508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.335128 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-grr76" event={"ID":"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427","Type":"ContainerStarted","Data":"6f42ecfdb0bb7acbeaab6850d34a19380d7aec4a8c17df7eda4796dc012b322f"} Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.376017 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-29s58" event={"ID":"800c2317-34fe-4640-b0fc-d275fccca804","Type":"ContainerStarted","Data":"707db82dffcc6bb56a6cfda3f2fc18fcc152a5f157ba0865a63deacf035d1518"} Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.379071 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-q9b62" podStartSLOduration=103.379052364 podStartE2EDuration="1m43.379052364s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:04.342051155 +0000 UTC m=+124.154401883" watchObservedRunningTime="2025-10-06 15:03:04.379052364 +0000 UTC m=+124.191403082" Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.379396 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nktzz" podStartSLOduration=103.379389863 podStartE2EDuration="1m43.379389863s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:04.253110556 +0000 UTC m=+124.065461294" watchObservedRunningTime="2025-10-06 15:03:04.379389863 +0000 UTC m=+124.191740591" Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.379928 4888 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bphs2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.380041 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" podUID="8b65d758-78c2-4e61-8553-2298157b49a3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.380394 4888 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7cv44 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.380475 4888 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" podUID="bc56c25b-5e9c-44dc-a333-14e2aa680f44" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.389205 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:04 crc kubenswrapper[4888]: E1006 15:03:04.389570 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:04.889556028 +0000 UTC m=+124.701906746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.495369 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:04 crc kubenswrapper[4888]: E1006 15:03:04.497307 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:04.997276985 +0000 UTC m=+124.809627703 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.521137 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:04 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:04 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:04 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.521238 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.624887 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:04 crc kubenswrapper[4888]: E1006 15:03:04.625341 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:05.125327713 +0000 UTC m=+124.937678421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.728965 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:04 crc kubenswrapper[4888]: E1006 15:03:04.729618 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:05.229598609 +0000 UTC m=+125.041949327 (durationBeforeRetry 500ms). 
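The router's startup probe failure above carries an aggregated healthz body: named sub-checks, each reported as [+] ok or [-] failed, with the endpoint returning 500 while any check fails ([-]backend-http and [-]has-synced here, [+]process-running passing). An illustrative handler in that style; the check names come from the log, everything else is assumption:

    package main

    import (
    	"fmt"
    	"net/http"
    	"net/http/httptest"
    )

    type check struct {
    	name string
    	fn   func() error
    }

    // healthz aggregates named sub-checks the way the router's probe output
    // reads: "[-]name failed" / "[+]name ok", HTTP 500 if anything failed.
    func healthz(checks []check) http.HandlerFunc {
    	return func(w http.ResponseWriter, r *http.Request) {
    		body, failed := "", false
    		for _, c := range checks {
    			if err := c.fn(); err != nil {
    				failed = true
    				body += "[-]" + c.name + " failed: reason withheld\n"
    			} else {
    				body += "[+]" + c.name + " ok\n"
    			}
    		}
    		if failed {
    			w.WriteHeader(http.StatusInternalServerError)
    			body += "healthz check failed\n"
    		}
    		fmt.Fprint(w, body)
    	}
    }

    func main() {
    	h := healthz([]check{
    		{"backend-http", func() error { return fmt.Errorf("backends not loaded") }},
    		{"has-synced", func() error { return fmt.Errorf("initial sync pending") }},
    		{"process-running", func() error { return nil }},
    	})
    	rec := httptest.NewRecorder()
    	h(rec, httptest.NewRequest(http.MethodGet, "/healthz", nil))
    	fmt.Println(rec.Code) // 500, reported by the kubelet as a startup probe failure
    	fmt.Print(rec.Body.String())
    }

That 500 is what patch_prober.go surfaces as "HTTP probe failed with statuscode: 500" each second while the router waits for its first config sync.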
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.831386 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:04 crc kubenswrapper[4888]: E1006 15:03:04.831718 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:05.331705471 +0000 UTC m=+125.144056189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.844331 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zt5hl"] Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.845982 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.852857 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.881875 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt5hl"] Oct 06 15:03:04 crc kubenswrapper[4888]: I1006 15:03:04.936268 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:04 crc kubenswrapper[4888]: E1006 15:03:04.936617 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:05.436598905 +0000 UTC m=+125.248949623 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.041228 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9zv\" (UniqueName: \"kubernetes.io/projected/c81c9b40-801b-4b14-84c3-e684bbdae002-kube-api-access-nc9zv\") pod \"redhat-marketplace-zt5hl\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") " pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.041305 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-catalog-content\") pod \"redhat-marketplace-zt5hl\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") " pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.041430 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-utilities\") pod \"redhat-marketplace-zt5hl\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") " pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.041492 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:05 crc kubenswrapper[4888]: E1006 15:03:05.042013 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:05.541998483 +0000 UTC m=+125.354349201 (durationBeforeRetry 500ms). 
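Around the stuck PVC, the normal volume lifecycle keeps moving for the marketplace pods: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded for each of utilities, catalog-content, and the projected kube-api-access token. The pattern behind those reconciler_common.go messages is a desired-state versus actual-state sweep; a toy version under assumed semantics:

    package main

    import "fmt"

    // reconcile mirrors the kubelet volume reconciler pattern visible above:
    // volumes mounted but no longer desired get an UnmountVolume, volumes
    // desired but not yet mounted get a MountVolume.
    func reconcile(desired, actual map[string]bool) {
    	for v := range actual {
    		if !desired[v] {
    			fmt.Println("operationExecutor.UnmountVolume started for", v)
    		}
    	}
    	for v := range desired {
    		if !actual[v] {
    			fmt.Println("operationExecutor.MountVolume started for", v)
    		}
    	}
    }

    func main() {
    	desired := map[string]bool{"kube-api-access-nc9zv": true, "catalog-content": true}
    	actual := map[string]bool{"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8": true}
    	reconcile(desired, actual)
    }

Each sweep runs on every sync, which is why the same UnmountVolume/MountVolume pair for the PVC reappears about every 100ms between the 500ms retry denials.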
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.177539 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:05 crc kubenswrapper[4888]: E1006 15:03:05.178409 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:05.678381694 +0000 UTC m=+125.490732412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.178519 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-catalog-content\") pod \"redhat-marketplace-zt5hl\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") " pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.178550 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-utilities\") pod \"redhat-marketplace-zt5hl\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") " pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.178571 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.178644 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9zv\" (UniqueName: \"kubernetes.io/projected/c81c9b40-801b-4b14-84c3-e684bbdae002-kube-api-access-nc9zv\") pod \"redhat-marketplace-zt5hl\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") " pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.179303 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-catalog-content\") pod \"redhat-marketplace-zt5hl\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") " pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.179498 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-utilities\") pod \"redhat-marketplace-zt5hl\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") " pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:03:05 crc kubenswrapper[4888]: E1006 15:03:05.179700 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:05.679692182 +0000 UTC m=+125.492042890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.245945 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9zv\" (UniqueName: \"kubernetes.io/projected/c81c9b40-801b-4b14-84c3-e684bbdae002-kube-api-access-nc9zv\") pod \"redhat-marketplace-zt5hl\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") " pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.279227 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:05 crc kubenswrapper[4888]: E1006 15:03:05.279556 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:05.779539779 +0000 UTC m=+125.591890497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.370476 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5s5kj"] Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.371628 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.400585 4888 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j8c9b container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded" start-of-body= Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.400660 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" podUID="661fb82e-5117-41bb-a175-bf72f6c288bd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.401140 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-catalog-content\") pod \"redhat-marketplace-5s5kj\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.401218 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.401254 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-utilities\") pod \"redhat-marketplace-5s5kj\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.401289 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbfpf\" (UniqueName: \"kubernetes.io/projected/dfe51362-c625-4856-adbd-6fa6f1380156-kube-api-access-lbfpf\") pod \"redhat-marketplace-5s5kj\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:05 crc kubenswrapper[4888]: E1006 15:03:05.402366 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:05.902355114 +0000 UTC m=+125.714705832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.512276 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.512951 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.513210 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-catalog-content\") pod \"redhat-marketplace-5s5kj\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.513267 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-utilities\") pod \"redhat-marketplace-5s5kj\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.513299 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbfpf\" (UniqueName: \"kubernetes.io/projected/dfe51362-c625-4856-adbd-6fa6f1380156-kube-api-access-lbfpf\") pod \"redhat-marketplace-5s5kj\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:05 crc kubenswrapper[4888]: E1006 15:03:05.527909 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:06.027876669 +0000 UTC m=+125.840227377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.528803 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-utilities\") pod \"redhat-marketplace-5s5kj\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.529010 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-catalog-content\") pod \"redhat-marketplace-5s5kj\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.537225 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:05 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:05 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:05 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.537264 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.540523 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5s5kj"] Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.540739 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" event={"ID":"f1e75e2a-e00f-4299-8d19-83c62c76ed52","Type":"ContainerStarted","Data":"100871ba78dd61cc713b4145425b51494d61631481a66c899673bb9a1eded6e2"} Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.559261 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-grr76" event={"ID":"20d7ff7d-bdc3-4bd6-b748-b44da7dc0427","Type":"ContainerStarted","Data":"dd0f1542ac7e1f58de1f5eb01ba54b7057eb05ccbfc453cb27e3bc93b093a339"} Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.559945 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-grr76" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.561044 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" event={"ID":"94d5add2-07ad-4171-bebc-8129f4819ccb","Type":"ContainerStarted","Data":"889a240413e9dd7dad360c9882d46acf9ec223ee65e1a6c7fdd6ab5b0335e92f"} Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.565512 4888 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7cv44 container/packageserver 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.565554 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" podUID="bc56c25b-5e9c-44dc-a333-14e2aa680f44" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.566036 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" event={"ID":"2451070b-304a-439f-a920-27834d657820","Type":"ContainerStarted","Data":"192f1fefc86bd0338f109e90e39826825dac8a11fc65cda36716263b2d7a4743"} Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.619456 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:05 crc kubenswrapper[4888]: E1006 15:03:05.620113 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:06.120093644 +0000 UTC m=+125.932444372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.670800 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbfpf\" (UniqueName: \"kubernetes.io/projected/dfe51362-c625-4856-adbd-6fa6f1380156-kube-api-access-lbfpf\") pod \"redhat-marketplace-5s5kj\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.670888 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bsv89"] Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.728102 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:05 crc kubenswrapper[4888]: E1006 15:03:05.729091 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 15:03:06.229071337 +0000 UTC m=+126.041422055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.837037 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:05 crc kubenswrapper[4888]: E1006 15:03:05.837376 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:06.337364059 +0000 UTC m=+126.149714777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.848045 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.962020 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.963061 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw" Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.985349 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:05 crc kubenswrapper[4888]: E1006 15:03:05.985894 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:06.485870882 +0000 UTC m=+126.298221600 (durationBeforeRetry 500ms). 
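Two distinct probe failure signatures appear in this stretch: packageserver and controller-manager fail fast with "connect: connection refused" (the address resolves but nothing accepts on the port yet), while openshift-config-operator burns its whole probe budget and ends in "context deadline exceeded" / "Client.Timeout exceeded while awaiting headers" (the attempt hangs instead of being refused). The difference reproduces with a plain dialer; the local port below is an arbitrary one assumed closed, and 192.0.2.1 (TEST-NET-1) is assumed unreachable:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// A closed port is refused immediately, like the 10.217.0.41:5443 probes.
    	if _, err := net.DialTimeout("tcp", "127.0.0.1:59999", time.Second); err != nil {
    		fmt.Println(err) // dial tcp 127.0.0.1:59999: connect: connection refused
    	}
    	// A blackholed address exhausts the timeout instead, like the healthz
    	// GETs that end in "context deadline exceeded".
    	if _, err := net.DialTimeout("tcp", "192.0.2.1:8443", time.Second); err != nil {
    		fmt.Println(err) // dial tcp 192.0.2.1:8443: i/o timeout
    	}
    }

Refusals usually mean the container process is up but its listener is not bound yet; timeouts suggest the process is wedged or still initializing past its probe window.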
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.986224 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw"
Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.987267 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw"
Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.998054 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p7mdg"]
Oct 06 15:03:05 crc kubenswrapper[4888]: I1006 15:03:05.999012 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7mdg"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.010868 4888 patch_prober.go:28] interesting pod/apiserver-76f77b778f-dqtkw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.010925 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" podUID="ecd7117d-afeb-4c89-a4ba-0b098f9ca84a" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.025245 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.056973 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-grr76" podStartSLOduration=12.056953442 podStartE2EDuration="12.056953442s" podCreationTimestamp="2025-10-06 15:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:06.030495482 +0000 UTC m=+125.842846190" watchObservedRunningTime="2025-10-06 15:03:06.056953442 +0000 UTC m=+125.869304160"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.060565 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7mdg"]
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.101399 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fkzl\" (UniqueName: \"kubernetes.io/projected/9b9f33d4-46f4-47e3-a852-9dd264924080-kube-api-access-4fkzl\") pod \"redhat-operators-p7mdg\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " pod="openshift-marketplace/redhat-operators-p7mdg"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.101524 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-catalog-content\") pod \"redhat-operators-p7mdg\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " pod="openshift-marketplace/redhat-operators-p7mdg"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.101544 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-utilities\") pod \"redhat-operators-p7mdg\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " pod="openshift-marketplace/redhat-operators-p7mdg"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.101642 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:06 crc kubenswrapper[4888]: E1006 15:03:06.103433 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:06.603421165 +0000 UTC m=+126.415771883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.172678 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.217375 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.217653 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-catalog-content\") pod \"redhat-operators-p7mdg\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " pod="openshift-marketplace/redhat-operators-p7mdg"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.217786 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-utilities\") pod \"redhat-operators-p7mdg\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " pod="openshift-marketplace/redhat-operators-p7mdg"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.217946 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fkzl\" (UniqueName: \"kubernetes.io/projected/9b9f33d4-46f4-47e3-a852-9dd264924080-kube-api-access-4fkzl\") pod \"redhat-operators-p7mdg\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " pod="openshift-marketplace/redhat-operators-p7mdg"
Oct 06 15:03:06 crc kubenswrapper[4888]: E1006 15:03:06.218606 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:06.718588238 +0000 UTC m=+126.530938956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.219257 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-catalog-content\") pod \"redhat-operators-p7mdg\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " pod="openshift-marketplace/redhat-operators-p7mdg"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.219538 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-utilities\") pod \"redhat-operators-p7mdg\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " pod="openshift-marketplace/redhat-operators-p7mdg"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.239528 4888 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j8c9b container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.239587 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" podUID="661fb82e-5117-41bb-a175-bf72f6c288bd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.308607 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-69vd6" podStartSLOduration=105.308584068 podStartE2EDuration="1m45.308584068s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:06.117443943 +0000 UTC m=+125.929794661" watchObservedRunningTime="2025-10-06 15:03:06.308584068 +0000 UTC m=+126.120934786"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.320031 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:06 crc kubenswrapper[4888]: E1006 15:03:06.321193 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:06.821181105 +0000 UTC m=+126.633531823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.322843 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jmkzm"]
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.324284 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmkzm"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.392132 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fkzl\" (UniqueName: \"kubernetes.io/projected/9b9f33d4-46f4-47e3-a852-9dd264924080-kube-api-access-4fkzl\") pod \"redhat-operators-p7mdg\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " pod="openshift-marketplace/redhat-operators-p7mdg"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.403359 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-5vzr4" podStartSLOduration=105.403340606 podStartE2EDuration="1m45.403340606s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:06.346468951 +0000 UTC m=+126.158819659" watchObservedRunningTime="2025-10-06 15:03:06.403340606 +0000 UTC m=+126.215691314"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.404465 4888 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j8c9b container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.404549 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b" podUID="661fb82e-5117-41bb-a175-bf72f6c288bd" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.419461 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmkzm"]
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.421429 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.421911 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-catalog-content\") pod \"redhat-operators-jmkzm\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " pod="openshift-marketplace/redhat-operators-jmkzm"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.422061 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-utilities\") pod \"redhat-operators-jmkzm\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " pod="openshift-marketplace/redhat-operators-jmkzm"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.422251 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4zsz\" (UniqueName: \"kubernetes.io/projected/98174955-9911-4a94-8149-a6572031b287-kube-api-access-c4zsz\") pod \"redhat-operators-jmkzm\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " pod="openshift-marketplace/redhat-operators-jmkzm"
Oct 06 15:03:06 crc kubenswrapper[4888]: E1006 15:03:06.422536 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:06.922511495 +0000 UTC m=+126.734862213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.509463 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 06 15:03:06 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld
Oct 06 15:03:06 crc kubenswrapper[4888]: [+]process-running ok
Oct 06 15:03:06 crc kubenswrapper[4888]: healthz check failed
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.511307 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4k4rc"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.509542 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.524400 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-catalog-content\") pod \"redhat-operators-jmkzm\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " pod="openshift-marketplace/redhat-operators-jmkzm"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.524443 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-utilities\") pod \"redhat-operators-jmkzm\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " pod="openshift-marketplace/redhat-operators-jmkzm"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.524467 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.524508 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4zsz\" (UniqueName: \"kubernetes.io/projected/98174955-9911-4a94-8149-a6572031b287-kube-api-access-c4zsz\") pod \"redhat-operators-jmkzm\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " pod="openshift-marketplace/redhat-operators-jmkzm"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.525118 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-catalog-content\") pod \"redhat-operators-jmkzm\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " pod="openshift-marketplace/redhat-operators-jmkzm"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.525310 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-utilities\") pod \"redhat-operators-jmkzm\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " pod="openshift-marketplace/redhat-operators-jmkzm"
Oct 06 15:03:06 crc kubenswrapper[4888]: E1006 15:03:06.525504 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:07.025486613 +0000 UTC m=+126.837837321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.603146 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsv89" event={"ID":"43206429-01b9-4d6c-8f90-a2f02ca09a1d","Type":"ContainerStarted","Data":"9a0a808c99a6dc6bc5827169a791cc0086cf6a2441a1c8639a24b7b6fcc7e5cf"}
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.620107 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4zsz\" (UniqueName: \"kubernetes.io/projected/98174955-9911-4a94-8149-a6572031b287-kube-api-access-c4zsz\") pod \"redhat-operators-jmkzm\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " pod="openshift-marketplace/redhat-operators-jmkzm"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.628818 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:06 crc kubenswrapper[4888]: E1006 15:03:06.632973 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:07.132941491 +0000 UTC m=+126.945292209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.638181 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.643713 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7mdg"
Oct 06 15:03:06 crc kubenswrapper[4888]: E1006 15:03:06.645227 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:07.145204018 +0000 UTC m=+126.957554736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.658696 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-29s58" event={"ID":"800c2317-34fe-4640-b0fc-d275fccca804","Type":"ContainerStarted","Data":"a68d54a8010b974cabc8a2d516e12f3a802f7271071fb173bddfbf55fd84414b"}
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.715225 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmkzm"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.728895 4888 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wlzb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.728946 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wlzb" podUID="1fe6b2fe-b6ea-49eb-8f71-552c70f42e37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.728980 4888 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wlzb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.729042 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6wlzb" podUID="1fe6b2fe-b6ea-49eb-8f71-552c70f42e37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.739602 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:06 crc kubenswrapper[4888]: E1006 15:03:06.740548 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:07.240527423 +0000 UTC m=+127.052878141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.760275 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-vrp9q"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.760314 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-vrp9q"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.782050 4888 patch_prober.go:28] interesting pod/console-f9d7485db-vrp9q container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.782133 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vrp9q" podUID="20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.840928 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:06 crc kubenswrapper[4888]: E1006 15:03:06.843323 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:07.343308795 +0000 UTC m=+127.155659503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.856419 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw"
Oct 06 15:03:06 crc kubenswrapper[4888]: I1006 15:03:06.952396 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:06 crc kubenswrapper[4888]: E1006 15:03:06.952718 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:07.45270018 +0000 UTC m=+127.265050888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.061112 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:07 crc kubenswrapper[4888]: E1006 15:03:07.061415 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:07.561402745 +0000 UTC m=+127.373753463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.084442 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887"
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.140837 4888 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p26ff container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.140882 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" podUID="821ef1e1-2128-4c28-9030-8faacb7d5fb7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused"
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.140950 4888 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p26ff container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.140963 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" podUID="821ef1e1-2128-4c28-9030-8faacb7d5fb7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused"
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.166387 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:07 crc kubenswrapper[4888]: E1006 15:03:07.166713 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:07.66669413 +0000 UTC m=+127.479044848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.267320 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdfhn\" (UniqueName: \"kubernetes.io/projected/e9ca1572-99ce-4516-96ae-1a9772e4cb35-kube-api-access-vdfhn\") pod \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") "
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.268532 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9ca1572-99ce-4516-96ae-1a9772e4cb35-config-volume\") pod \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") "
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.268617 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9ca1572-99ce-4516-96ae-1a9772e4cb35-secret-volume\") pod \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\" (UID: \"e9ca1572-99ce-4516-96ae-1a9772e4cb35\") "
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.268869 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:07 crc kubenswrapper[4888]: E1006 15:03:07.269269 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:07.769256026 +0000 UTC m=+127.581606734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.271362 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9ca1572-99ce-4516-96ae-1a9772e4cb35-config-volume" (OuterVolumeSpecName: "config-volume") pod "e9ca1572-99ce-4516-96ae-1a9772e4cb35" (UID: "e9ca1572-99ce-4516-96ae-1a9772e4cb35"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.286203 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ca1572-99ce-4516-96ae-1a9772e4cb35-kube-api-access-vdfhn" (OuterVolumeSpecName: "kube-api-access-vdfhn") pod "e9ca1572-99ce-4516-96ae-1a9772e4cb35" (UID: "e9ca1572-99ce-4516-96ae-1a9772e4cb35"). InnerVolumeSpecName "kube-api-access-vdfhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.286479 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ca1572-99ce-4516-96ae-1a9772e4cb35-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e9ca1572-99ce-4516-96ae-1a9772e4cb35" (UID: "e9ca1572-99ce-4516-96ae-1a9772e4cb35"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.318965 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2"
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.375704 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:07 crc kubenswrapper[4888]: E1006 15:03:07.376096 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:07.876076346 +0000 UTC m=+127.688427064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.376210 4888 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e9ca1572-99ce-4516-96ae-1a9772e4cb35-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.376228 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdfhn\" (UniqueName: \"kubernetes.io/projected/e9ca1572-99ce-4516-96ae-1a9772e4cb35-kube-api-access-vdfhn\") on node \"crc\" DevicePath \"\""
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.376237 4888 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9ca1572-99ce-4516-96ae-1a9772e4cb35-config-volume\") on node \"crc\" DevicePath \"\""
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.486665 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:07 crc kubenswrapper[4888]: E1006 15:03:07.487060 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:07.987042527 +0000 UTC m=+127.799393245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.486651 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-574kn"]
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.493880 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-t5brn"
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.498784 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 06 15:03:07 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld
Oct 06 15:03:07 crc kubenswrapper[4888]: [+]process-running ok
Oct 06 15:03:07 crc kubenswrapper[4888]: healthz check failed
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.498834 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.515441 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n94pp"]
Oct 06 15:03:07 crc kubenswrapper[4888]: W1006 15:03:07.548183 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5e23ea0_bdeb_4f81_882f_fce7b54b73d9.slice/crio-56ba9d91dd4e023862be6d4df7334ee9ec9b1d7e6f17c1d384678a20f0e74b55 WatchSource:0}: Error finding container 56ba9d91dd4e023862be6d4df7334ee9ec9b1d7e6f17c1d384678a20f0e74b55: Status 404 returned error can't find the container with id 56ba9d91dd4e023862be6d4df7334ee9ec9b1d7e6f17c1d384678a20f0e74b55
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.587529 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:07 crc kubenswrapper[4888]: E1006 15:03:07.587655 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:08.087614295 +0000 UTC m=+127.899965013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.587711 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:07 crc kubenswrapper[4888]: E1006 15:03:07.588553 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:08.088545642 +0000 UTC m=+127.900896360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.627458 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-29s58" podStartSLOduration=106.627435815 podStartE2EDuration="1m46.627435815s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:07.51494134 +0000 UTC m=+127.327292048" watchObservedRunningTime="2025-10-06 15:03:07.627435815 +0000 UTC m=+127.439786523"
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.633454 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nfpw"]
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.691091 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:07 crc kubenswrapper[4888]: E1006 15:03:07.691821 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:08.191779767 +0000 UTC m=+128.004130495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.704894 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n94pp" event={"ID":"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9","Type":"ContainerStarted","Data":"56ba9d91dd4e023862be6d4df7334ee9ec9b1d7e6f17c1d384678a20f0e74b55"}
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.752953 4888 generic.go:334] "Generic (PLEG): container finished" podID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" containerID="3dffa297d91836fa33dab47c3122f6e935e57ba526cd5f165717ed0133fb860a" exitCode=0
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.753267 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsv89" event={"ID":"43206429-01b9-4d6c-8f90-a2f02ca09a1d","Type":"ContainerDied","Data":"3dffa297d91836fa33dab47c3122f6e935e57ba526cd5f165717ed0133fb860a"}
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.768582 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.803637 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:07 crc kubenswrapper[4888]: E1006 15:03:07.804421 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:08.304391866 +0000 UTC m=+128.116742584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.815042 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" event={"ID":"94d5add2-07ad-4171-bebc-8129f4819ccb","Type":"ContainerStarted","Data":"ed1b40221e3525e5d65df87e077577de37f20bc352a26e2e0a0c3e0542f0fe6d"}
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.875532 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887" event={"ID":"e9ca1572-99ce-4516-96ae-1a9772e4cb35","Type":"ContainerDied","Data":"3934e339f36456e964f863234768ded580884e42fa9f31ee7b785252ad69e3f9"}
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.875578 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3934e339f36456e964f863234768ded580884e42fa9f31ee7b785252ad69e3f9"
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.875682 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887"
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.899087 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-574kn" event={"ID":"ca490a5a-c686-44ac-b282-d260a32fbe71","Type":"ContainerStarted","Data":"d48b0499b29a31fc98232b7352767ab302a23079454df272db496cea1a2bbb13"}
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.908665 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:07 crc kubenswrapper[4888]: E1006 15:03:07.917453 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:08.417425257 +0000 UTC m=+128.229775975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.932921 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvbpw"
Oct 06 15:03:07 crc kubenswrapper[4888]: I1006 15:03:07.945032 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt5hl"]
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.011095 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:08 crc kubenswrapper[4888]: E1006 15:03:08.011743 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:08.511726403 +0000 UTC m=+128.324077121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.114346 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:08 crc kubenswrapper[4888]: E1006 15:03:08.114934 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:08.614907806 +0000 UTC m=+128.427258534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.215123 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmkzm"]
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.218655 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:08 crc kubenswrapper[4888]: E1006 15:03:08.218995 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:08.718981706 +0000 UTC m=+128.531332424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.220346 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j8c9b"
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.265933 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5s5kj"]
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.320157 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:08 crc kubenswrapper[4888]: E1006 15:03:08.320512 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:08.820491771 +0000 UTC m=+128.632842489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.335895 4888 patch_prober.go:28] interesting pod/console-operator-58897d9998-n956g container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.335959 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-n956g" podUID="8befdf1d-c770-4804-bce0-ef5cc8787c8f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.336459 4888 patch_prober.go:28] interesting pod/console-operator-58897d9998-n956g container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.336531 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-n956g" podUID="8befdf1d-c770-4804-bce0-ef5cc8787c8f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.423811 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:08 crc kubenswrapper[4888]: E1006 15:03:08.424419 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:08.924405707 +0000 UTC m=+128.736756425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.512466 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 06 15:03:08 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld
Oct 06 15:03:08 crc kubenswrapper[4888]: [+]process-running ok
Oct 06 15:03:08 crc kubenswrapper[4888]: healthz check failed
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.512518 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.532354 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:08 crc kubenswrapper[4888]: E1006 15:03:08.532854 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.032836063 +0000 UTC m=+128.845186781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.589953 4888 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7cv44 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.590916 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" podUID="bc56c25b-5e9c-44dc-a333-14e2aa680f44" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.591053 4888 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7cv44 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.591073 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" podUID="bc56c25b-5e9c-44dc-a333-14e2aa680f44" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.634627 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:08 crc kubenswrapper[4888]: E1006 15:03:08.635076 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.13506183 +0000 UTC m=+128.947412558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.735375 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 15:03:08 crc kubenswrapper[4888]: E1006 15:03:08.735581 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.235551195 +0000 UTC m=+129.047901913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.735674 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:03:08 crc kubenswrapper[4888]: E1006 15:03:08.735972 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.235960767 +0000 UTC m=+129.048311485 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.811492 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7mdg"] Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.851598 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:08 crc kubenswrapper[4888]: E1006 15:03:08.852198 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.352175501 +0000 UTC m=+129.164526219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:08 crc kubenswrapper[4888]: W1006 15:03:08.912148 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b9f33d4_46f4_47e3_a852_9dd264924080.slice/crio-ab71b6408f0d3cd061e977c6756b63d74638b00b242aadbfee07eb9dda804b85 WatchSource:0}: Error finding container ab71b6408f0d3cd061e977c6756b63d74638b00b242aadbfee07eb9dda804b85: Status 404 returned error can't find the container with id ab71b6408f0d3cd061e977c6756b63d74638b00b242aadbfee07eb9dda804b85 Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.941740 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" event={"ID":"94d5add2-07ad-4171-bebc-8129f4819ccb","Type":"ContainerStarted","Data":"8f4296f4bca353a7f3659ea4f50f576d322d8c717100115eb01c6c9bd41cab0d"} Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.952830 4888 generic.go:334] "Generic (PLEG): container finished" podID="dfe51362-c625-4856-adbd-6fa6f1380156" containerID="cb78e31807d041e3dda87f119d2860d95993df5960c48c989d1cb8b04f5270e2" exitCode=0 Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.952947 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s5kj" event={"ID":"dfe51362-c625-4856-adbd-6fa6f1380156","Type":"ContainerDied","Data":"cb78e31807d041e3dda87f119d2860d95993df5960c48c989d1cb8b04f5270e2"} Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.952974 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s5kj" 
event={"ID":"dfe51362-c625-4856-adbd-6fa6f1380156","Type":"ContainerStarted","Data":"81632351b3f971b4dd94c05eb4e2f2b280556223b8d67ab28a72fbb6917371f4"} Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.954553 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:08 crc kubenswrapper[4888]: E1006 15:03:08.955012 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.454996494 +0000 UTC m=+129.267347212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.960605 4888 generic.go:334] "Generic (PLEG): container finished" podID="98174955-9911-4a94-8149-a6572031b287" containerID="f9f1a3f94b4ee62b7b4cc4abc9bc2d7668b4ce7bf7ce7554329dc577cb9c801d" exitCode=0 Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.960702 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmkzm" event={"ID":"98174955-9911-4a94-8149-a6572031b287","Type":"ContainerDied","Data":"f9f1a3f94b4ee62b7b4cc4abc9bc2d7668b4ce7bf7ce7554329dc577cb9c801d"} Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.960729 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmkzm" event={"ID":"98174955-9911-4a94-8149-a6572031b287","Type":"ContainerStarted","Data":"bcf179e9e87cc1cd6d83d50470f9a7dc595aba25b8996cb2907e6dfeabb04c11"} Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.979978 4888 generic.go:334] "Generic (PLEG): container finished" podID="c81c9b40-801b-4b14-84c3-e684bbdae002" containerID="4ab00d63ab344fd20927a5639bfcf2774a0c888643e538d7a1e2c570d3f14688" exitCode=0 Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.980193 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt5hl" event={"ID":"c81c9b40-801b-4b14-84c3-e684bbdae002","Type":"ContainerDied","Data":"4ab00d63ab344fd20927a5639bfcf2774a0c888643e538d7a1e2c570d3f14688"} Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.980249 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt5hl" event={"ID":"c81c9b40-801b-4b14-84c3-e684bbdae002","Type":"ContainerStarted","Data":"2ddc02d2afc764b938c860e1794b46bb2ee0d0aae80e28abff25b076a662b301"} Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.982092 4888 generic.go:334] "Generic (PLEG): container finished" podID="ca490a5a-c686-44ac-b282-d260a32fbe71" containerID="409592badf5bdeab40ba11f2b5eb93a0ce03ae3137254363fc46edc0a08b3578" exitCode=0 Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.982140 4888 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-574kn" event={"ID":"ca490a5a-c686-44ac-b282-d260a32fbe71","Type":"ContainerDied","Data":"409592badf5bdeab40ba11f2b5eb93a0ce03ae3137254363fc46edc0a08b3578"} Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.992691 4888 generic.go:334] "Generic (PLEG): container finished" podID="b8705687-8217-4b9f-bed8-a293b8a041b0" containerID="7822273e1cfbcc6512d318b1e804056286ed4a87c0cf51de7a3c3910a53c2550" exitCode=0 Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.992814 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nfpw" event={"ID":"b8705687-8217-4b9f-bed8-a293b8a041b0","Type":"ContainerDied","Data":"7822273e1cfbcc6512d318b1e804056286ed4a87c0cf51de7a3c3910a53c2550"} Oct 06 15:03:08 crc kubenswrapper[4888]: I1006 15:03:08.992852 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nfpw" event={"ID":"b8705687-8217-4b9f-bed8-a293b8a041b0","Type":"ContainerStarted","Data":"fc763b7355175edbde3b88862949b47412fe31a7422ae32880fe140f5fec3874"} Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.014203 4888 generic.go:334] "Generic (PLEG): container finished" podID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" containerID="e30b0500f911c0e883c9160b1e5e377408146d03dd5d14ff9f65f61da15b3101" exitCode=0 Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.022061 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n94pp" event={"ID":"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9","Type":"ContainerDied","Data":"e30b0500f911c0e883c9160b1e5e377408146d03dd5d14ff9f65f61da15b3101"} Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.065218 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.080309 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.080664 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ca1572-99ce-4516-96ae-1a9772e4cb35" containerName="collect-profiles" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.080681 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ca1572-99ce-4516-96ae-1a9772e4cb35" containerName="collect-profiles" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.080893 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ca1572-99ce-4516-96ae-1a9772e4cb35" containerName="collect-profiles" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.081399 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.089124 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.589086958 +0000 UTC m=+129.401437676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.089434 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.089922 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.589910522 +0000 UTC m=+129.402261240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.128314 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.133351 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.137662 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.206403 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.206570 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.706543877 +0000 UTC m=+129.518894595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.206735 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db1404fa-72d9-4ceb-a8ca-3c958b110505-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"db1404fa-72d9-4ceb-a8ca-3c958b110505\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.206772 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db1404fa-72d9-4ceb-a8ca-3c958b110505-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"db1404fa-72d9-4ceb-a8ca-3c958b110505\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.206832 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.207596 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.707588377 +0000 UTC m=+129.519939085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.308387 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.308612 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db1404fa-72d9-4ceb-a8ca-3c958b110505-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"db1404fa-72d9-4ceb-a8ca-3c958b110505\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.308640 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db1404fa-72d9-4ceb-a8ca-3c958b110505-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"db1404fa-72d9-4ceb-a8ca-3c958b110505\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.308816 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db1404fa-72d9-4ceb-a8ca-3c958b110505-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"db1404fa-72d9-4ceb-a8ca-3c958b110505\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.308897 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.808878597 +0000 UTC m=+129.621229315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.388570 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db1404fa-72d9-4ceb-a8ca-3c958b110505-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"db1404fa-72d9-4ceb-a8ca-3c958b110505\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.411914 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.412300 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:09.912288047 +0000 UTC m=+129.724638765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.432065 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.500156 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:09 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:09 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:09 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.500212 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.514185 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.514285 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:10.014264736 +0000 UTC m=+129.826615454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.514546 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.514929 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:10.014921145 +0000 UTC m=+129.827271863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.616202 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.616648 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:10.116626776 +0000 UTC m=+129.928977494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.717278 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.717755 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:10.21773729 +0000 UTC m=+130.030088008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.818056 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.818444 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:10.318424971 +0000 UTC m=+130.130775689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:09 crc kubenswrapper[4888]: I1006 15:03:09.920165 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:09 crc kubenswrapper[4888]: E1006 15:03:09.920702 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:10.420689708 +0000 UTC m=+130.233040426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.025825 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:10 crc kubenswrapper[4888]: E1006 15:03:10.026198 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:10.526178479 +0000 UTC m=+130.338529197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.115490 4888 generic.go:334] "Generic (PLEG): container finished" podID="9b9f33d4-46f4-47e3-a852-9dd264924080" containerID="0597412d9bacd7c1c357c85d69af10ebe3f6aea8253178dd3c8f66b6f0a6cfbe" exitCode=0 Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.115557 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7mdg" event={"ID":"9b9f33d4-46f4-47e3-a852-9dd264924080","Type":"ContainerDied","Data":"0597412d9bacd7c1c357c85d69af10ebe3f6aea8253178dd3c8f66b6f0a6cfbe"} Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.115633 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7mdg" event={"ID":"9b9f33d4-46f4-47e3-a852-9dd264924080","Type":"ContainerStarted","Data":"ab71b6408f0d3cd061e977c6756b63d74638b00b242aadbfee07eb9dda804b85"} Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.129642 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:10 crc kubenswrapper[4888]: E1006 15:03:10.129978 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:10.629965731 +0000 UTC m=+130.442316449 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.230296 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:10 crc kubenswrapper[4888]: E1006 15:03:10.231400 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:10.731378414 +0000 UTC m=+130.543729132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.332507 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:10 crc kubenswrapper[4888]: E1006 15:03:10.332936 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:10.83292028 +0000 UTC m=+130.645270998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.433444 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:10 crc kubenswrapper[4888]: E1006 15:03:10.433774 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:10.933755846 +0000 UTC m=+130.746106554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.511097 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:10 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:10 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:10 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.511166 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.514963 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.535042 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:10 crc kubenswrapper[4888]: E1006 15:03:10.535320 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:11.035308473 +0000 UTC m=+130.847659191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:10 crc kubenswrapper[4888]: W1006 15:03:10.601520 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddb1404fa_72d9_4ceb_a8ca_3c958b110505.slice/crio-b03cf164f6b9324ad73c003523c7f104cdf0eaa644f9625d334621b766dd79b8 WatchSource:0}: Error finding container b03cf164f6b9324ad73c003523c7f104cdf0eaa644f9625d334621b766dd79b8: Status 404 returned error can't find the container with id b03cf164f6b9324ad73c003523c7f104cdf0eaa644f9625d334621b766dd79b8 Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.639448 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:10 crc kubenswrapper[4888]: E1006 15:03:10.639914 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:11.139895737 +0000 UTC m=+130.952246455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.681967 4888 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.741677 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:10 crc kubenswrapper[4888]: E1006 15:03:10.742097 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:11.242077012 +0000 UTC m=+131.054427720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.842535 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:10 crc kubenswrapper[4888]: E1006 15:03:10.842740 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:11.342706411 +0000 UTC m=+131.155057129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.842890 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:10 crc kubenswrapper[4888]: E1006 15:03:10.843202 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:11.343194996 +0000 UTC m=+131.155545714 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:10 crc kubenswrapper[4888]: I1006 15:03:10.947440 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:10 crc kubenswrapper[4888]: E1006 15:03:10.992542 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:11.492508773 +0000 UTC m=+131.304859491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.040895 4888 patch_prober.go:28] interesting pod/apiserver-76f77b778f-dqtkw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 06 15:03:11 crc kubenswrapper[4888]: [+]log ok Oct 06 15:03:11 crc kubenswrapper[4888]: [+]etcd ok Oct 06 15:03:11 crc kubenswrapper[4888]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 06 15:03:11 crc kubenswrapper[4888]: [+]poststarthook/generic-apiserver-start-informers ok Oct 06 15:03:11 crc kubenswrapper[4888]: [+]poststarthook/max-in-flight-filter ok Oct 06 15:03:11 crc kubenswrapper[4888]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 06 15:03:11 crc kubenswrapper[4888]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 06 15:03:11 crc kubenswrapper[4888]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Oct 06 15:03:11 crc kubenswrapper[4888]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Oct 06 15:03:11 crc kubenswrapper[4888]: [+]poststarthook/project.openshift.io-projectcache ok Oct 06 15:03:11 crc kubenswrapper[4888]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 06 15:03:11 crc kubenswrapper[4888]: [+]poststarthook/openshift.io-startinformers ok Oct 06 15:03:11 crc kubenswrapper[4888]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 06 15:03:11 crc kubenswrapper[4888]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 06 15:03:11 crc kubenswrapper[4888]: livez check failed Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.040987 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" 
podUID="ecd7117d-afeb-4c89-a4ba-0b098f9ca84a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.093435 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:11 crc kubenswrapper[4888]: E1006 15:03:11.093750 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:11.59373675 +0000 UTC m=+131.406087468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.118059 4888 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-06T15:03:10.681997673Z","Handler":null,"Name":""} Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.195300 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:11 crc kubenswrapper[4888]: E1006 15:03:11.195623 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 15:03:11.695600715 +0000 UTC m=+131.507951423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.250051 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"db1404fa-72d9-4ceb-a8ca-3c958b110505","Type":"ContainerStarted","Data":"b03cf164f6b9324ad73c003523c7f104cdf0eaa644f9625d334621b766dd79b8"} Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.270138 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" event={"ID":"94d5add2-07ad-4171-bebc-8129f4819ccb","Type":"ContainerStarted","Data":"4543782205d907fc61c752752983e3f01a92ac0d96fa53f2e1b19776de7938c7"} Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.301689 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:11 crc kubenswrapper[4888]: E1006 15:03:11.302051 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 15:03:11.802037185 +0000 UTC m=+131.614387893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j2xlv" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.324591 4888 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.324644 4888 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.326013 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-s5g2l" podStartSLOduration=17.325994201 podStartE2EDuration="17.325994201s" podCreationTimestamp="2025-10-06 15:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:11.32523899 +0000 UTC m=+131.137589708" watchObservedRunningTime="2025-10-06 15:03:11.325994201 +0000 UTC m=+131.138344919" Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.403955 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.483903 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.501011 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:11 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:11 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:11 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.501069 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.507406 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.529160 4888 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.529204 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:11 crc kubenswrapper[4888]: I1006 15:03:11.864849 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j2xlv\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:12 crc kubenswrapper[4888]: I1006 15:03:12.030585 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 06 15:03:12 crc kubenswrapper[4888]: I1006 15:03:12.034876 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:12 crc kubenswrapper[4888]: I1006 15:03:12.416821 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"db1404fa-72d9-4ceb-a8ca-3c958b110505","Type":"ContainerStarted","Data":"f086fa592e3d5f2ad7ba739a5a8078a26a2d75be8a5c7f8669be3a143d6986ec"} Oct 06 15:03:12 crc kubenswrapper[4888]: I1006 15:03:12.447455 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.447437711 podStartE2EDuration="4.447437711s" podCreationTimestamp="2025-10-06 15:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:12.446148373 +0000 UTC m=+132.258499081" watchObservedRunningTime="2025-10-06 15:03:12.447437711 +0000 UTC m=+132.259788429" Oct 06 15:03:12 crc kubenswrapper[4888]: I1006 15:03:12.518736 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:12 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:12 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:12 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:12 crc kubenswrapper[4888]: I1006 15:03:12.518787 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:12 crc kubenswrapper[4888]: I1006 15:03:12.666973 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j2xlv"] Oct 06 15:03:12 crc kubenswrapper[4888]: I1006 15:03:12.964540 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 06 15:03:13 crc kubenswrapper[4888]: I1006 15:03:13.468426 4888 generic.go:334] "Generic (PLEG): container finished" podID="db1404fa-72d9-4ceb-a8ca-3c958b110505" containerID="f086fa592e3d5f2ad7ba739a5a8078a26a2d75be8a5c7f8669be3a143d6986ec" exitCode=0 Oct 06 15:03:13 crc kubenswrapper[4888]: I1006 15:03:13.468484 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"db1404fa-72d9-4ceb-a8ca-3c958b110505","Type":"ContainerDied","Data":"f086fa592e3d5f2ad7ba739a5a8078a26a2d75be8a5c7f8669be3a143d6986ec"} Oct 06 15:03:13 crc kubenswrapper[4888]: I1006 15:03:13.485315 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" event={"ID":"8fff9bc3-9673-46a4-8f88-56f9c24e16f1","Type":"ContainerStarted","Data":"29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4"} Oct 06 15:03:13 crc kubenswrapper[4888]: I1006 15:03:13.485361 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" event={"ID":"8fff9bc3-9673-46a4-8f88-56f9c24e16f1","Type":"ContainerStarted","Data":"5192c4fe6946138317283e91dd730c0b3183b38bb1e052f25b075e47e5faaa41"} Oct 06 15:03:13 crc 
kubenswrapper[4888]: I1006 15:03:13.485580 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:13 crc kubenswrapper[4888]: I1006 15:03:13.496670 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:13 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:13 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:13 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:13 crc kubenswrapper[4888]: I1006 15:03:13.496936 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:14 crc kubenswrapper[4888]: I1006 15:03:14.495895 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:14 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:14 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:14 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:14 crc kubenswrapper[4888]: I1006 15:03:14.495961 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:14 crc kubenswrapper[4888]: I1006 15:03:14.827025 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 15:03:14 crc kubenswrapper[4888]: I1006 15:03:14.845488 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" podStartSLOduration=113.845457131 podStartE2EDuration="1m53.845457131s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:13.518150053 +0000 UTC m=+133.330500791" watchObservedRunningTime="2025-10-06 15:03:14.845457131 +0000 UTC m=+134.657807849" Oct 06 15:03:14 crc kubenswrapper[4888]: I1006 15:03:14.891994 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db1404fa-72d9-4ceb-a8ca-3c958b110505-kubelet-dir\") pod \"db1404fa-72d9-4ceb-a8ca-3c958b110505\" (UID: \"db1404fa-72d9-4ceb-a8ca-3c958b110505\") " Oct 06 15:03:14 crc kubenswrapper[4888]: I1006 15:03:14.892064 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db1404fa-72d9-4ceb-a8ca-3c958b110505-kube-api-access\") pod \"db1404fa-72d9-4ceb-a8ca-3c958b110505\" (UID: \"db1404fa-72d9-4ceb-a8ca-3c958b110505\") " Oct 06 15:03:14 crc kubenswrapper[4888]: I1006 15:03:14.893859 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db1404fa-72d9-4ceb-a8ca-3c958b110505-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "db1404fa-72d9-4ceb-a8ca-3c958b110505" (UID: "db1404fa-72d9-4ceb-a8ca-3c958b110505"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:03:14 crc kubenswrapper[4888]: I1006 15:03:14.899272 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db1404fa-72d9-4ceb-a8ca-3c958b110505-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "db1404fa-72d9-4ceb-a8ca-3c958b110505" (UID: "db1404fa-72d9-4ceb-a8ca-3c958b110505"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:03:14 crc kubenswrapper[4888]: I1006 15:03:14.997010 4888 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db1404fa-72d9-4ceb-a8ca-3c958b110505-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 06 15:03:14 crc kubenswrapper[4888]: I1006 15:03:14.997043 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db1404fa-72d9-4ceb-a8ca-3c958b110505-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 15:03:15 crc kubenswrapper[4888]: I1006 15:03:15.499932 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:15 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:15 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:15 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:15 crc kubenswrapper[4888]: I1006 15:03:15.500038 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:15 crc kubenswrapper[4888]: I1006 15:03:15.533174 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"db1404fa-72d9-4ceb-a8ca-3c958b110505","Type":"ContainerDied","Data":"b03cf164f6b9324ad73c003523c7f104cdf0eaa644f9625d334621b766dd79b8"} Oct 06 15:03:15 crc kubenswrapper[4888]: I1006 15:03:15.533226 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b03cf164f6b9324ad73c003523c7f104cdf0eaa644f9625d334621b766dd79b8" Oct 06 15:03:15 crc kubenswrapper[4888]: I1006 15:03:15.533297 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 15:03:15 crc kubenswrapper[4888]: I1006 15:03:15.649664 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-grr76" Oct 06 15:03:15 crc kubenswrapper[4888]: I1006 15:03:15.984402 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:03:15 crc kubenswrapper[4888]: I1006 15:03:15.989242 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-dqtkw" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.322757 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 06 15:03:16 crc kubenswrapper[4888]: E1006 15:03:16.323126 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db1404fa-72d9-4ceb-a8ca-3c958b110505" containerName="pruner" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.323145 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="db1404fa-72d9-4ceb-a8ca-3c958b110505" containerName="pruner" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.323297 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="db1404fa-72d9-4ceb-a8ca-3c958b110505" containerName="pruner" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.323749 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.329233 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.329630 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.333127 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.432433 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"975300d5-f80e-44ba-b4b2-0c322b16d8b5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.432493 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"975300d5-f80e-44ba-b4b2-0c322b16d8b5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.503222 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:16 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:16 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:16 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.503314 4888 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.534939 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"975300d5-f80e-44ba-b4b2-0c322b16d8b5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.535083 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"975300d5-f80e-44ba-b4b2-0c322b16d8b5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.535157 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"975300d5-f80e-44ba-b4b2-0c322b16d8b5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.580864 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"975300d5-f80e-44ba-b4b2-0c322b16d8b5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.676956 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.721280 4888 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wlzb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.721332 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6wlzb" podUID="1fe6b2fe-b6ea-49eb-8f71-552c70f42e37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.721424 4888 patch_prober.go:28] interesting pod/downloads-7954f5f757-6wlzb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.721559 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6wlzb" podUID="1fe6b2fe-b6ea-49eb-8f71-552c70f42e37" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.747164 4888 patch_prober.go:28] interesting pod/console-f9d7485db-vrp9q container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Oct 06 15:03:16 crc kubenswrapper[4888]: I1006 15:03:16.747245 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vrp9q" podUID="20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Oct 06 15:03:17 crc kubenswrapper[4888]: I1006 15:03:17.147258 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:03:17 crc kubenswrapper[4888]: I1006 15:03:17.239461 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 06 15:03:17 crc kubenswrapper[4888]: W1006 15:03:17.288174 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod975300d5_f80e_44ba_b4b2_0c322b16d8b5.slice/crio-f854639179e608756a3b6258a1f360c4212e95c56f5a078b6607cb613e771299 WatchSource:0}: Error finding container f854639179e608756a3b6258a1f360c4212e95c56f5a078b6607cb613e771299: Status 404 returned error can't find the container with id f854639179e608756a3b6258a1f360c4212e95c56f5a078b6607cb613e771299 Oct 06 15:03:17 crc kubenswrapper[4888]: I1006 15:03:17.345119 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-n956g" Oct 06 15:03:17 crc kubenswrapper[4888]: I1006 15:03:17.510139 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Oct 06 15:03:17 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:17 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:17 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:17 crc kubenswrapper[4888]: I1006 15:03:17.510237 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:17 crc kubenswrapper[4888]: I1006 15:03:17.587076 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"975300d5-f80e-44ba-b4b2-0c322b16d8b5","Type":"ContainerStarted","Data":"f854639179e608756a3b6258a1f360c4212e95c56f5a078b6607cb613e771299"} Oct 06 15:03:17 crc kubenswrapper[4888]: I1006 15:03:17.593912 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7cv44" Oct 06 15:03:18 crc kubenswrapper[4888]: I1006 15:03:18.499263 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:18 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:18 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:18 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:18 crc kubenswrapper[4888]: I1006 15:03:18.499687 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:19 crc kubenswrapper[4888]: I1006 15:03:19.496324 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:19 crc kubenswrapper[4888]: [-]has-synced failed: reason withheld Oct 06 15:03:19 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:19 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:19 crc kubenswrapper[4888]: I1006 15:03:19.496393 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:19 crc kubenswrapper[4888]: I1006 15:03:19.635202 4888 generic.go:334] "Generic (PLEG): container finished" podID="975300d5-f80e-44ba-b4b2-0c322b16d8b5" containerID="e82359c7b84969fac9b40d0a4f1668ea4ada4060e5b64b010cee372acb142a02" exitCode=0 Oct 06 15:03:19 crc kubenswrapper[4888]: I1006 15:03:19.635406 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"975300d5-f80e-44ba-b4b2-0c322b16d8b5","Type":"ContainerDied","Data":"e82359c7b84969fac9b40d0a4f1668ea4ada4060e5b64b010cee372acb142a02"} Oct 06 15:03:20 crc kubenswrapper[4888]: I1006 15:03:20.495887 4888 patch_prober.go:28] interesting pod/router-default-5444994796-t5brn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Oct 06 15:03:20 crc kubenswrapper[4888]: [+]has-synced ok Oct 06 15:03:20 crc kubenswrapper[4888]: [+]process-running ok Oct 06 15:03:20 crc kubenswrapper[4888]: healthz check failed Oct 06 15:03:20 crc kubenswrapper[4888]: I1006 15:03:20.496196 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t5brn" podUID="5919f36e-dcc7-439b-9660-63f7b8c32b5a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:03:21 crc kubenswrapper[4888]: I1006 15:03:21.501999 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:03:21 crc kubenswrapper[4888]: I1006 15:03:21.515067 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-t5brn" Oct 06 15:03:22 crc kubenswrapper[4888]: I1006 15:03:22.683133 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-4wdjj_1d6dbd47-5f20-486f-864c-7042f45c2ab4/cluster-samples-operator/0.log" Oct 06 15:03:22 crc kubenswrapper[4888]: I1006 15:03:22.683340 4888 generic.go:334] "Generic (PLEG): container finished" podID="1d6dbd47-5f20-486f-864c-7042f45c2ab4" containerID="03f626845378fcd27b00bd4848a533136e04017e4c8bf45b1c9a5c9b1758fbf6" exitCode=2 Oct 06 15:03:22 crc kubenswrapper[4888]: I1006 15:03:22.683369 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" event={"ID":"1d6dbd47-5f20-486f-864c-7042f45c2ab4","Type":"ContainerDied","Data":"03f626845378fcd27b00bd4848a533136e04017e4c8bf45b1c9a5c9b1758fbf6"} Oct 06 15:03:22 crc kubenswrapper[4888]: I1006 15:03:22.683838 4888 scope.go:117] "RemoveContainer" containerID="03f626845378fcd27b00bd4848a533136e04017e4c8bf45b1c9a5c9b1758fbf6" Oct 06 15:03:23 crc kubenswrapper[4888]: I1006 15:03:23.856725 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:03:26 crc kubenswrapper[4888]: I1006 15:03:26.727028 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6wlzb" Oct 06 15:03:26 crc kubenswrapper[4888]: I1006 15:03:26.756193 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:03:26 crc kubenswrapper[4888]: I1006 15:03:26.760336 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:03:28 crc kubenswrapper[4888]: I1006 15:03:28.773478 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:03:28 crc kubenswrapper[4888]: I1006 15:03:28.773920 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:03:28 crc kubenswrapper[4888]: I1006 15:03:28.775669 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 06 15:03:28 crc kubenswrapper[4888]: I1006 15:03:28.775677 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 06 15:03:28 crc kubenswrapper[4888]: I1006 15:03:28.792156 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:03:28 crc kubenswrapper[4888]: I1006 15:03:28.874584 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:03:28 crc kubenswrapper[4888]: I1006 15:03:28.874697 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:03:28 crc kubenswrapper[4888]: I1006 15:03:28.876308 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 06 15:03:28 crc kubenswrapper[4888]: I1006 15:03:28.887028 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 06 15:03:28 crc kubenswrapper[4888]: I1006 15:03:28.900419 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:03:28 crc kubenswrapper[4888]: I1006 15:03:28.901111 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.056987 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.147725 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.154402 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.164587 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.369631 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.381881 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kube-api-access\") pod \"975300d5-f80e-44ba-b4b2-0c322b16d8b5\" (UID: \"975300d5-f80e-44ba-b4b2-0c322b16d8b5\") " Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.381948 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kubelet-dir\") pod \"975300d5-f80e-44ba-b4b2-0c322b16d8b5\" (UID: \"975300d5-f80e-44ba-b4b2-0c322b16d8b5\") " Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.382102 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "975300d5-f80e-44ba-b4b2-0c322b16d8b5" (UID: "975300d5-f80e-44ba-b4b2-0c322b16d8b5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.382528 4888 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.387726 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "975300d5-f80e-44ba-b4b2-0c322b16d8b5" (UID: "975300d5-f80e-44ba-b4b2-0c322b16d8b5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.484231 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/975300d5-f80e-44ba-b4b2-0c322b16d8b5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.759509 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"975300d5-f80e-44ba-b4b2-0c322b16d8b5","Type":"ContainerDied","Data":"f854639179e608756a3b6258a1f360c4212e95c56f5a078b6607cb613e771299"} Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.759559 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f854639179e608756a3b6258a1f360c4212e95c56f5a078b6607cb613e771299" Oct 06 15:03:29 crc kubenswrapper[4888]: I1006 15:03:29.759579 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 15:03:32 crc kubenswrapper[4888]: I1006 15:03:32.041448 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" Oct 06 15:03:32 crc kubenswrapper[4888]: I1006 15:03:32.563857 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:03:32 crc kubenswrapper[4888]: I1006 15:03:32.564258 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:03:37 crc kubenswrapper[4888]: I1006 15:03:37.881976 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gj6m8" Oct 06 15:03:43 crc kubenswrapper[4888]: I1006 15:03:43.054434 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:03:43 crc kubenswrapper[4888]: I1006 15:03:43.056579 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 06 15:03:43 crc kubenswrapper[4888]: I1006 15:03:43.074318 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2aee40f4-3a30-43cb-aa49-aabcf3c074b7-metrics-certs\") pod \"network-metrics-daemon-hm59m\" (UID: \"2aee40f4-3a30-43cb-aa49-aabcf3c074b7\") " pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:03:43 crc kubenswrapper[4888]: I1006 15:03:43.273818 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 06 15:03:43 crc kubenswrapper[4888]: I1006 15:03:43.283206 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hm59m" Oct 06 15:03:43 crc kubenswrapper[4888]: E1006 15:03:43.860661 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 06 15:03:43 crc kubenswrapper[4888]: E1006 15:03:43.861018 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74bc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-n94pp_openshift-marketplace(b5e23ea0-bdeb-4f81-882f-fce7b54b73d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 15:03:43 crc kubenswrapper[4888]: E1006 15:03:43.862551 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-n94pp" podUID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" Oct 06 15:03:46 crc kubenswrapper[4888]: E1006 15:03:46.152434 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-n94pp" podUID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" Oct 06 15:03:46 crc kubenswrapper[4888]: E1006 15:03:46.721783 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 06 15:03:46 crc kubenswrapper[4888]: E1006 15:03:46.722208 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6b8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bsv89_openshift-marketplace(43206429-01b9-4d6c-8f90-a2f02ca09a1d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 15:03:46 crc kubenswrapper[4888]: E1006 15:03:46.723378 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bsv89" podUID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" Oct 06 15:03:47 crc kubenswrapper[4888]: E1006 15:03:47.024450 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 06 15:03:47 crc kubenswrapper[4888]: E1006 15:03:47.024624 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nc9zv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zt5hl_openshift-marketplace(c81c9b40-801b-4b14-84c3-e684bbdae002): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 15:03:47 crc kubenswrapper[4888]: E1006 15:03:47.025808 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-zt5hl" podUID="c81c9b40-801b-4b14-84c3-e684bbdae002" Oct 06 15:03:47 crc kubenswrapper[4888]: E1006 15:03:47.283297 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 06 15:03:47 crc kubenswrapper[4888]: E1006 15:03:47.283516 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhm7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5nfpw_openshift-marketplace(b8705687-8217-4b9f-bed8-a293b8a041b0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 15:03:47 crc kubenswrapper[4888]: E1006 15:03:47.284865 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5nfpw" podUID="b8705687-8217-4b9f-bed8-a293b8a041b0" Oct 06 15:03:49 crc kubenswrapper[4888]: E1006 15:03:49.684354 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5nfpw" podUID="b8705687-8217-4b9f-bed8-a293b8a041b0" Oct 06 15:03:49 crc kubenswrapper[4888]: E1006 15:03:49.684227 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zt5hl" podUID="c81c9b40-801b-4b14-84c3-e684bbdae002" Oct 06 15:03:49 crc kubenswrapper[4888]: E1006 15:03:49.684468 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bsv89" podUID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" Oct 06 15:03:50 crc kubenswrapper[4888]: E1006 15:03:50.023961 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 06 15:03:50 crc kubenswrapper[4888]: E1006 15:03:50.024476 4888 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4zsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jmkzm_openshift-marketplace(98174955-9911-4a94-8149-a6572031b287): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 15:03:50 crc kubenswrapper[4888]: E1006 15:03:50.025873 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jmkzm" podUID="98174955-9911-4a94-8149-a6572031b287" Oct 06 15:03:50 crc kubenswrapper[4888]: W1006 15:03:50.209900 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-1479d353f0d6459b010810fa3ed23066b6a551ef03d6d730fd3bc2b7973ca606 WatchSource:0}: Error finding container 1479d353f0d6459b010810fa3ed23066b6a551ef03d6d730fd3bc2b7973ca606: Status 404 returned error can't find the container with id 1479d353f0d6459b010810fa3ed23066b6a551ef03d6d730fd3bc2b7973ca606 Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.273695 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hm59m"] Oct 06 15:03:50 crc kubenswrapper[4888]: W1006 15:03:50.339737 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-7ba24ab1dc804fb0fea070028a3bbf89dc66c5f3e90297f02c3dd0cbb75acc3d WatchSource:0}: Error finding container 7ba24ab1dc804fb0fea070028a3bbf89dc66c5f3e90297f02c3dd0cbb75acc3d: Status 404 returned error can't find the container with id 7ba24ab1dc804fb0fea070028a3bbf89dc66c5f3e90297f02c3dd0cbb75acc3d Oct 06 15:03:50 crc kubenswrapper[4888]: W1006 15:03:50.343827 4888 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aee40f4_3a30_43cb_aa49_aabcf3c074b7.slice/crio-5341e01c7745147c44ff973f3f233979ae19efb59e4da683913d53ba1b08caab WatchSource:0}: Error finding container 5341e01c7745147c44ff973f3f233979ae19efb59e4da683913d53ba1b08caab: Status 404 returned error can't find the container with id 5341e01c7745147c44ff973f3f233979ae19efb59e4da683913d53ba1b08caab Oct 06 15:03:50 crc kubenswrapper[4888]: E1006 15:03:50.641170 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 06 15:03:50 crc kubenswrapper[4888]: E1006 15:03:50.641900 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fkzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-p7mdg_openshift-marketplace(9b9f33d4-46f4-47e3-a852-9dd264924080): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 15:03:50 crc kubenswrapper[4888]: E1006 15:03:50.644981 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-p7mdg" podUID="9b9f33d4-46f4-47e3-a852-9dd264924080" Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.877285 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"587c895602bb7cd7c37701f48885c1c8fb2efe95d6b034279a8a9a8c93ccd04b"} Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.877332 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7ba24ab1dc804fb0fea070028a3bbf89dc66c5f3e90297f02c3dd0cbb75acc3d"} Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.878635 4888 generic.go:334] "Generic (PLEG): container finished" podID="dfe51362-c625-4856-adbd-6fa6f1380156" containerID="bb46832883d792c608ca9ca28dc4c0e822a72363cf71e12f60d161dff9ac31e1" exitCode=0 Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.878673 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s5kj" event={"ID":"dfe51362-c625-4856-adbd-6fa6f1380156","Type":"ContainerDied","Data":"bb46832883d792c608ca9ca28dc4c0e822a72363cf71e12f60d161dff9ac31e1"} Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.883455 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a7306988748ce15e813ee52f21ccc4a3b84d18a4abb9229f9461b4fb0fe27527"} Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.883495 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cf8d19c473e842ae6a6ef9a7e45833eceb3b351a830eb6e5b3a6351294cfa6e6"} Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.886184 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-4wdjj_1d6dbd47-5f20-486f-864c-7042f45c2ab4/cluster-samples-operator/0.log" Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.886253 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wdjj" event={"ID":"1d6dbd47-5f20-486f-864c-7042f45c2ab4","Type":"ContainerStarted","Data":"ca24479c0878c4bb77e462fb3667a309eb02b1f66b67855997f30855d9da5d10"} Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.887458 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hm59m" event={"ID":"2aee40f4-3a30-43cb-aa49-aabcf3c074b7","Type":"ContainerStarted","Data":"5341e01c7745147c44ff973f3f233979ae19efb59e4da683913d53ba1b08caab"} Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.889603 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f750ef5bc9dc664e91ebe64c917d3bbaa1e04346c893ac92022bf0f7e699e65e"} Oct 06 15:03:50 crc kubenswrapper[4888]: I1006 15:03:50.889674 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1479d353f0d6459b010810fa3ed23066b6a551ef03d6d730fd3bc2b7973ca606"} Oct 06 15:03:50 crc kubenswrapper[4888]: E1006 15:03:50.891045 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jmkzm" podUID="98174955-9911-4a94-8149-a6572031b287" Oct 06 15:03:50 crc kubenswrapper[4888]: E1006 15:03:50.891249 
4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-p7mdg" podUID="9b9f33d4-46f4-47e3-a852-9dd264924080" Oct 06 15:03:51 crc kubenswrapper[4888]: E1006 15:03:51.361772 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 06 15:03:51 crc kubenswrapper[4888]: E1006 15:03:51.362162 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqmq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-574kn_openshift-marketplace(ca490a5a-c686-44ac-b282-d260a32fbe71): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 15:03:51 crc kubenswrapper[4888]: E1006 15:03:51.363336 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-574kn" podUID="ca490a5a-c686-44ac-b282-d260a32fbe71" Oct 06 15:03:51 crc kubenswrapper[4888]: I1006 15:03:51.895598 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hm59m" event={"ID":"2aee40f4-3a30-43cb-aa49-aabcf3c074b7","Type":"ContainerStarted","Data":"39deb66734fd3e13c891704fa06e398ea7091d6e6354dc7240be70dd63137eef"} Oct 06 15:03:51 crc kubenswrapper[4888]: I1006 15:03:51.895998 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hm59m" 
event={"ID":"2aee40f4-3a30-43cb-aa49-aabcf3c074b7","Type":"ContainerStarted","Data":"cc34d98408d4dd22cbb7b20566fca9247350a4bb7ec2773b4b9a7d48c25076af"} Oct 06 15:03:51 crc kubenswrapper[4888]: E1006 15:03:51.897229 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-574kn" podUID="ca490a5a-c686-44ac-b282-d260a32fbe71" Oct 06 15:03:52 crc kubenswrapper[4888]: I1006 15:03:52.927906 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hm59m" podStartSLOduration=151.927885906 podStartE2EDuration="2m31.927885906s" podCreationTimestamp="2025-10-06 15:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:03:52.9271006 +0000 UTC m=+172.739451318" watchObservedRunningTime="2025-10-06 15:03:52.927885906 +0000 UTC m=+172.740236624" Oct 06 15:03:53 crc kubenswrapper[4888]: I1006 15:03:53.914255 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s5kj" event={"ID":"dfe51362-c625-4856-adbd-6fa6f1380156","Type":"ContainerStarted","Data":"7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0"} Oct 06 15:03:53 crc kubenswrapper[4888]: I1006 15:03:53.933143 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5s5kj" podStartSLOduration=5.052477692 podStartE2EDuration="48.933122119s" podCreationTimestamp="2025-10-06 15:03:05 +0000 UTC" firstStartedPulling="2025-10-06 15:03:08.99777379 +0000 UTC m=+128.810124508" lastFinishedPulling="2025-10-06 15:03:52.878418217 +0000 UTC m=+172.690768935" observedRunningTime="2025-10-06 15:03:53.928677602 +0000 UTC m=+173.741028340" watchObservedRunningTime="2025-10-06 15:03:53.933122119 +0000 UTC m=+173.745472837" Oct 06 15:03:55 crc kubenswrapper[4888]: I1006 15:03:55.849006 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:55 crc kubenswrapper[4888]: I1006 15:03:55.849263 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:56 crc kubenswrapper[4888]: I1006 15:03:56.040683 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:03:59 crc kubenswrapper[4888]: I1006 15:03:59.154779 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:03:59 crc kubenswrapper[4888]: I1006 15:03:59.891954 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7g2cw"] Oct 06 15:04:01 crc kubenswrapper[4888]: I1006 15:04:01.970314 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n94pp" event={"ID":"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9","Type":"ContainerStarted","Data":"e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf"} Oct 06 15:04:01 crc kubenswrapper[4888]: I1006 15:04:01.975455 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt5hl" 
event={"ID":"c81c9b40-801b-4b14-84c3-e684bbdae002","Type":"ContainerStarted","Data":"7346e2fd1cf4cc6b41a438e1dd390011f2549841de061a487ba7cdaa4255db82"} Oct 06 15:04:02 crc kubenswrapper[4888]: I1006 15:04:02.563728 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:04:02 crc kubenswrapper[4888]: I1006 15:04:02.563780 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:04:02 crc kubenswrapper[4888]: I1006 15:04:02.986820 4888 generic.go:334] "Generic (PLEG): container finished" podID="c81c9b40-801b-4b14-84c3-e684bbdae002" containerID="7346e2fd1cf4cc6b41a438e1dd390011f2549841de061a487ba7cdaa4255db82" exitCode=0 Oct 06 15:04:02 crc kubenswrapper[4888]: I1006 15:04:02.986994 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt5hl" event={"ID":"c81c9b40-801b-4b14-84c3-e684bbdae002","Type":"ContainerDied","Data":"7346e2fd1cf4cc6b41a438e1dd390011f2549841de061a487ba7cdaa4255db82"} Oct 06 15:04:02 crc kubenswrapper[4888]: I1006 15:04:02.987173 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt5hl" event={"ID":"c81c9b40-801b-4b14-84c3-e684bbdae002","Type":"ContainerStarted","Data":"c0c0abe56452fdb6a9b6b39a7891d7907ac3a4594fbd16a077c8c64cad7f1ad4"} Oct 06 15:04:02 crc kubenswrapper[4888]: I1006 15:04:02.989990 4888 generic.go:334] "Generic (PLEG): container finished" podID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" containerID="e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf" exitCode=0 Oct 06 15:04:02 crc kubenswrapper[4888]: I1006 15:04:02.990032 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n94pp" event={"ID":"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9","Type":"ContainerDied","Data":"e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf"} Oct 06 15:04:03 crc kubenswrapper[4888]: I1006 15:04:03.014638 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zt5hl" podStartSLOduration=5.568948446 podStartE2EDuration="59.014618059s" podCreationTimestamp="2025-10-06 15:03:04 +0000 UTC" firstStartedPulling="2025-10-06 15:03:08.997928894 +0000 UTC m=+128.810279612" lastFinishedPulling="2025-10-06 15:04:02.443598507 +0000 UTC m=+182.255949225" observedRunningTime="2025-10-06 15:04:03.008456925 +0000 UTC m=+182.820807633" watchObservedRunningTime="2025-10-06 15:04:03.014618059 +0000 UTC m=+182.826968777" Oct 06 15:04:03 crc kubenswrapper[4888]: I1006 15:04:03.998131 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n94pp" event={"ID":"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9","Type":"ContainerStarted","Data":"80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d"} Oct 06 15:04:04 crc kubenswrapper[4888]: I1006 15:04:04.001376 4888 generic.go:334] "Generic (PLEG): container finished" podID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" 
containerID="bf020af5f4f8ab339c097bde3436ffa48e39ad2e49b728056c5f9efe2e701e69" exitCode=0 Oct 06 15:04:04 crc kubenswrapper[4888]: I1006 15:04:04.001458 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsv89" event={"ID":"43206429-01b9-4d6c-8f90-a2f02ca09a1d","Type":"ContainerDied","Data":"bf020af5f4f8ab339c097bde3436ffa48e39ad2e49b728056c5f9efe2e701e69"} Oct 06 15:04:04 crc kubenswrapper[4888]: I1006 15:04:04.004077 4888 generic.go:334] "Generic (PLEG): container finished" podID="98174955-9911-4a94-8149-a6572031b287" containerID="06cce4ef1b76cca84e033a31e087cef2d89d0d11ebed28013c7ad33ac1a7b487" exitCode=0 Oct 06 15:04:04 crc kubenswrapper[4888]: I1006 15:04:04.004145 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmkzm" event={"ID":"98174955-9911-4a94-8149-a6572031b287","Type":"ContainerDied","Data":"06cce4ef1b76cca84e033a31e087cef2d89d0d11ebed28013c7ad33ac1a7b487"} Oct 06 15:04:04 crc kubenswrapper[4888]: I1006 15:04:04.009383 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7mdg" event={"ID":"9b9f33d4-46f4-47e3-a852-9dd264924080","Type":"ContainerStarted","Data":"ce0b85cac813e0a3f5ce1c5daacc803e1ac1a3c4ce2c694f817a08d50166f477"} Oct 06 15:04:04 crc kubenswrapper[4888]: I1006 15:04:04.015390 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n94pp" podStartSLOduration=6.770100991 podStartE2EDuration="1m1.015369894s" podCreationTimestamp="2025-10-06 15:03:03 +0000 UTC" firstStartedPulling="2025-10-06 15:03:09.137042334 +0000 UTC m=+128.949393052" lastFinishedPulling="2025-10-06 15:04:03.382311237 +0000 UTC m=+183.194661955" observedRunningTime="2025-10-06 15:04:04.012288342 +0000 UTC m=+183.824639090" watchObservedRunningTime="2025-10-06 15:04:04.015369894 +0000 UTC m=+183.827720612" Oct 06 15:04:05 crc kubenswrapper[4888]: I1006 15:04:05.026817 4888 generic.go:334] "Generic (PLEG): container finished" podID="9b9f33d4-46f4-47e3-a852-9dd264924080" containerID="ce0b85cac813e0a3f5ce1c5daacc803e1ac1a3c4ce2c694f817a08d50166f477" exitCode=0 Oct 06 15:04:05 crc kubenswrapper[4888]: I1006 15:04:05.026909 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7mdg" event={"ID":"9b9f33d4-46f4-47e3-a852-9dd264924080","Type":"ContainerDied","Data":"ce0b85cac813e0a3f5ce1c5daacc803e1ac1a3c4ce2c694f817a08d50166f477"} Oct 06 15:04:05 crc kubenswrapper[4888]: I1006 15:04:05.513327 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:04:05 crc kubenswrapper[4888]: I1006 15:04:05.513399 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:04:05 crc kubenswrapper[4888]: I1006 15:04:05.559881 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:04:06 crc kubenswrapper[4888]: I1006 15:04:06.009519 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:04:06 crc kubenswrapper[4888]: I1006 15:04:06.033480 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsv89" 
event={"ID":"43206429-01b9-4d6c-8f90-a2f02ca09a1d","Type":"ContainerStarted","Data":"ec77e96e4b16dd52deff8e759fa8aef00e108362f3813098410e77a68d68f3b0"} Oct 06 15:04:06 crc kubenswrapper[4888]: I1006 15:04:06.035614 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmkzm" event={"ID":"98174955-9911-4a94-8149-a6572031b287","Type":"ContainerStarted","Data":"c3dcca1f134596d109b74194a24b89ebf04747265cf2ef6eca05f16bfa47f7c8"} Oct 06 15:04:06 crc kubenswrapper[4888]: I1006 15:04:06.036831 4888 generic.go:334] "Generic (PLEG): container finished" podID="b8705687-8217-4b9f-bed8-a293b8a041b0" containerID="bbd1ddd29bd0308c6b0259cbe8b07f10e619a2e95bd55d0faa73d7b24004f6fb" exitCode=0 Oct 06 15:04:06 crc kubenswrapper[4888]: I1006 15:04:06.036883 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nfpw" event={"ID":"b8705687-8217-4b9f-bed8-a293b8a041b0","Type":"ContainerDied","Data":"bbd1ddd29bd0308c6b0259cbe8b07f10e619a2e95bd55d0faa73d7b24004f6fb"} Oct 06 15:04:06 crc kubenswrapper[4888]: I1006 15:04:06.103450 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bsv89" podStartSLOduration=6.912922356 podStartE2EDuration="1m4.103434662s" podCreationTimestamp="2025-10-06 15:03:02 +0000 UTC" firstStartedPulling="2025-10-06 15:03:07.768169981 +0000 UTC m=+127.580520699" lastFinishedPulling="2025-10-06 15:04:04.958682287 +0000 UTC m=+184.771033005" observedRunningTime="2025-10-06 15:04:06.076751158 +0000 UTC m=+185.889101896" watchObservedRunningTime="2025-10-06 15:04:06.103434662 +0000 UTC m=+185.915785380" Oct 06 15:04:06 crc kubenswrapper[4888]: I1006 15:04:06.105515 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jmkzm" podStartSLOduration=4.016012823 podStartE2EDuration="1m0.10550699s" podCreationTimestamp="2025-10-06 15:03:06 +0000 UTC" firstStartedPulling="2025-10-06 15:03:08.997617414 +0000 UTC m=+128.809968132" lastFinishedPulling="2025-10-06 15:04:05.087111581 +0000 UTC m=+184.899462299" observedRunningTime="2025-10-06 15:04:06.104473075 +0000 UTC m=+185.916823793" watchObservedRunningTime="2025-10-06 15:04:06.10550699 +0000 UTC m=+185.917857708" Oct 06 15:04:06 crc kubenswrapper[4888]: I1006 15:04:06.716345 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jmkzm" Oct 06 15:04:06 crc kubenswrapper[4888]: I1006 15:04:06.716419 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jmkzm" Oct 06 15:04:07 crc kubenswrapper[4888]: I1006 15:04:07.042709 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-574kn" event={"ID":"ca490a5a-c686-44ac-b282-d260a32fbe71","Type":"ContainerStarted","Data":"715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803"} Oct 06 15:04:07 crc kubenswrapper[4888]: I1006 15:04:07.047299 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nfpw" event={"ID":"b8705687-8217-4b9f-bed8-a293b8a041b0","Type":"ContainerStarted","Data":"f2028032daa938eb56f13c0ef9ba06fa222e2045ef5cbceb3a864784b0c84735"} Oct 06 15:04:07 crc kubenswrapper[4888]: I1006 15:04:07.050128 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7mdg" 
event={"ID":"9b9f33d4-46f4-47e3-a852-9dd264924080","Type":"ContainerStarted","Data":"c5cf62f84013a375421645c57bf66bce3008ce95f52128cfb70795b3a3df6280"} Oct 06 15:04:07 crc kubenswrapper[4888]: I1006 15:04:07.092145 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p7mdg" podStartSLOduration=6.284773425 podStartE2EDuration="1m2.092123466s" podCreationTimestamp="2025-10-06 15:03:05 +0000 UTC" firstStartedPulling="2025-10-06 15:03:10.118151077 +0000 UTC m=+129.930501795" lastFinishedPulling="2025-10-06 15:04:05.925501118 +0000 UTC m=+185.737851836" observedRunningTime="2025-10-06 15:04:07.092117326 +0000 UTC m=+186.904468054" watchObservedRunningTime="2025-10-06 15:04:07.092123466 +0000 UTC m=+186.904474184" Oct 06 15:04:07 crc kubenswrapper[4888]: I1006 15:04:07.758596 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jmkzm" podUID="98174955-9911-4a94-8149-a6572031b287" containerName="registry-server" probeResult="failure" output=< Oct 06 15:04:07 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s Oct 06 15:04:07 crc kubenswrapper[4888]: > Oct 06 15:04:08 crc kubenswrapper[4888]: I1006 15:04:08.057477 4888 generic.go:334] "Generic (PLEG): container finished" podID="ca490a5a-c686-44ac-b282-d260a32fbe71" containerID="715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803" exitCode=0 Oct 06 15:04:08 crc kubenswrapper[4888]: I1006 15:04:08.057515 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-574kn" event={"ID":"ca490a5a-c686-44ac-b282-d260a32fbe71","Type":"ContainerDied","Data":"715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803"} Oct 06 15:04:08 crc kubenswrapper[4888]: I1006 15:04:08.083210 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5nfpw" podStartSLOduration=8.598262223 podStartE2EDuration="1m6.08317834s" podCreationTimestamp="2025-10-06 15:03:02 +0000 UTC" firstStartedPulling="2025-10-06 15:03:08.998727487 +0000 UTC m=+128.811078205" lastFinishedPulling="2025-10-06 15:04:06.483643604 +0000 UTC m=+186.295994322" observedRunningTime="2025-10-06 15:04:07.118915613 +0000 UTC m=+186.931266351" watchObservedRunningTime="2025-10-06 15:04:08.08317834 +0000 UTC m=+187.895542329" Oct 06 15:04:09 crc kubenswrapper[4888]: I1006 15:04:09.958724 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5s5kj"] Oct 06 15:04:09 crc kubenswrapper[4888]: I1006 15:04:09.959243 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5s5kj" podUID="dfe51362-c625-4856-adbd-6fa6f1380156" containerName="registry-server" containerID="cri-o://7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0" gracePeriod=2 Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.075934 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-574kn" event={"ID":"ca490a5a-c686-44ac-b282-d260a32fbe71","Type":"ContainerStarted","Data":"7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b"} Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.094990 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-574kn" podStartSLOduration=7.071612983 podStartE2EDuration="1m7.094975851s" 
podCreationTimestamp="2025-10-06 15:03:03 +0000 UTC" firstStartedPulling="2025-10-06 15:03:08.998210022 +0000 UTC m=+128.810560740" lastFinishedPulling="2025-10-06 15:04:09.0215729 +0000 UTC m=+188.833923608" observedRunningTime="2025-10-06 15:04:10.093022837 +0000 UTC m=+189.905373565" watchObservedRunningTime="2025-10-06 15:04:10.094975851 +0000 UTC m=+189.907326559" Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.627622 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.726235 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-catalog-content\") pod \"dfe51362-c625-4856-adbd-6fa6f1380156\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.726301 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbfpf\" (UniqueName: \"kubernetes.io/projected/dfe51362-c625-4856-adbd-6fa6f1380156-kube-api-access-lbfpf\") pod \"dfe51362-c625-4856-adbd-6fa6f1380156\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.726338 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-utilities\") pod \"dfe51362-c625-4856-adbd-6fa6f1380156\" (UID: \"dfe51362-c625-4856-adbd-6fa6f1380156\") " Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.727150 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-utilities" (OuterVolumeSpecName: "utilities") pod "dfe51362-c625-4856-adbd-6fa6f1380156" (UID: "dfe51362-c625-4856-adbd-6fa6f1380156"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.734950 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe51362-c625-4856-adbd-6fa6f1380156-kube-api-access-lbfpf" (OuterVolumeSpecName: "kube-api-access-lbfpf") pod "dfe51362-c625-4856-adbd-6fa6f1380156" (UID: "dfe51362-c625-4856-adbd-6fa6f1380156"). InnerVolumeSpecName "kube-api-access-lbfpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.743288 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfe51362-c625-4856-adbd-6fa6f1380156" (UID: "dfe51362-c625-4856-adbd-6fa6f1380156"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.827625 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.827664 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbfpf\" (UniqueName: \"kubernetes.io/projected/dfe51362-c625-4856-adbd-6fa6f1380156-kube-api-access-lbfpf\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:10 crc kubenswrapper[4888]: I1006 15:04:10.827678 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfe51362-c625-4856-adbd-6fa6f1380156-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.084670 4888 generic.go:334] "Generic (PLEG): container finished" podID="dfe51362-c625-4856-adbd-6fa6f1380156" containerID="7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0" exitCode=0 Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.084718 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s5kj" event={"ID":"dfe51362-c625-4856-adbd-6fa6f1380156","Type":"ContainerDied","Data":"7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0"} Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.084747 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5s5kj" event={"ID":"dfe51362-c625-4856-adbd-6fa6f1380156","Type":"ContainerDied","Data":"81632351b3f971b4dd94c05eb4e2f2b280556223b8d67ab28a72fbb6917371f4"} Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.084770 4888 scope.go:117] "RemoveContainer" containerID="7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0" Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.084925 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5s5kj" Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.104482 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5s5kj"] Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.109597 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5s5kj"] Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.126487 4888 scope.go:117] "RemoveContainer" containerID="bb46832883d792c608ca9ca28dc4c0e822a72363cf71e12f60d161dff9ac31e1" Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.195864 4888 scope.go:117] "RemoveContainer" containerID="cb78e31807d041e3dda87f119d2860d95993df5960c48c989d1cb8b04f5270e2" Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.214038 4888 scope.go:117] "RemoveContainer" containerID="7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0" Oct 06 15:04:11 crc kubenswrapper[4888]: E1006 15:04:11.215231 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0\": container with ID starting with 7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0 not found: ID does not exist" containerID="7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0" Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.215262 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0"} err="failed to get container status \"7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0\": rpc error: code = NotFound desc = could not find container \"7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0\": container with ID starting with 7f7ac4b0613a3c9184a83700afcb0c9d458b017099ee1cc9abfafb87fde39ef0 not found: ID does not exist" Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.215300 4888 scope.go:117] "RemoveContainer" containerID="bb46832883d792c608ca9ca28dc4c0e822a72363cf71e12f60d161dff9ac31e1" Oct 06 15:04:11 crc kubenswrapper[4888]: E1006 15:04:11.217734 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb46832883d792c608ca9ca28dc4c0e822a72363cf71e12f60d161dff9ac31e1\": container with ID starting with bb46832883d792c608ca9ca28dc4c0e822a72363cf71e12f60d161dff9ac31e1 not found: ID does not exist" containerID="bb46832883d792c608ca9ca28dc4c0e822a72363cf71e12f60d161dff9ac31e1" Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.217779 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb46832883d792c608ca9ca28dc4c0e822a72363cf71e12f60d161dff9ac31e1"} err="failed to get container status \"bb46832883d792c608ca9ca28dc4c0e822a72363cf71e12f60d161dff9ac31e1\": rpc error: code = NotFound desc = could not find container \"bb46832883d792c608ca9ca28dc4c0e822a72363cf71e12f60d161dff9ac31e1\": container with ID starting with bb46832883d792c608ca9ca28dc4c0e822a72363cf71e12f60d161dff9ac31e1 not found: ID does not exist" Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.217824 4888 scope.go:117] "RemoveContainer" containerID="cb78e31807d041e3dda87f119d2860d95993df5960c48c989d1cb8b04f5270e2" Oct 06 15:04:11 crc kubenswrapper[4888]: E1006 15:04:11.218181 4888 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cb78e31807d041e3dda87f119d2860d95993df5960c48c989d1cb8b04f5270e2\": container with ID starting with cb78e31807d041e3dda87f119d2860d95993df5960c48c989d1cb8b04f5270e2 not found: ID does not exist" containerID="cb78e31807d041e3dda87f119d2860d95993df5960c48c989d1cb8b04f5270e2" Oct 06 15:04:11 crc kubenswrapper[4888]: I1006 15:04:11.218210 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb78e31807d041e3dda87f119d2860d95993df5960c48c989d1cb8b04f5270e2"} err="failed to get container status \"cb78e31807d041e3dda87f119d2860d95993df5960c48c989d1cb8b04f5270e2\": rpc error: code = NotFound desc = could not find container \"cb78e31807d041e3dda87f119d2860d95993df5960c48c989d1cb8b04f5270e2\": container with ID starting with cb78e31807d041e3dda87f119d2860d95993df5960c48c989d1cb8b04f5270e2 not found: ID does not exist" Oct 06 15:04:12 crc kubenswrapper[4888]: I1006 15:04:12.937105 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe51362-c625-4856-adbd-6fa6f1380156" path="/var/lib/kubelet/pods/dfe51362-c625-4856-adbd-6fa6f1380156/volumes" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.027204 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bsv89" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.027468 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bsv89" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.065688 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bsv89" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.129690 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bsv89" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.395535 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5nfpw" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.395587 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5nfpw" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.436509 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5nfpw" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.484600 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-574kn" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.484636 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-574kn" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.525042 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-574kn" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.633112 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n94pp" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.633163 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n94pp" Oct 06 15:04:13 crc kubenswrapper[4888]: I1006 15:04:13.679637 4888 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n94pp" Oct 06 15:04:14 crc kubenswrapper[4888]: I1006 15:04:14.144062 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5nfpw" Oct 06 15:04:14 crc kubenswrapper[4888]: I1006 15:04:14.147582 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-574kn" Oct 06 15:04:14 crc kubenswrapper[4888]: I1006 15:04:14.147730 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n94pp" Oct 06 15:04:15 crc kubenswrapper[4888]: I1006 15:04:15.563090 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.359897 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n94pp"] Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.360176 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n94pp" podUID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" containerName="registry-server" containerID="cri-o://80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d" gracePeriod=2 Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.557684 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-574kn"] Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.558164 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-574kn" podUID="ca490a5a-c686-44ac-b282-d260a32fbe71" containerName="registry-server" containerID="cri-o://7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b" gracePeriod=2 Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.646035 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p7mdg" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.646078 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p7mdg" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.692476 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p7mdg" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.769816 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jmkzm" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.790392 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n94pp" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.830304 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jmkzm" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.894936 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74bc2\" (UniqueName: \"kubernetes.io/projected/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-kube-api-access-74bc2\") pod \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.895027 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-catalog-content\") pod \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.895074 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-utilities\") pod \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\" (UID: \"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9\") " Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.895820 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-utilities" (OuterVolumeSpecName: "utilities") pod "b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" (UID: "b5e23ea0-bdeb-4f81-882f-fce7b54b73d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.902817 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-kube-api-access-74bc2" (OuterVolumeSpecName: "kube-api-access-74bc2") pod "b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" (UID: "b5e23ea0-bdeb-4f81-882f-fce7b54b73d9"). InnerVolumeSpecName "kube-api-access-74bc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.910333 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-574kn" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.946002 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" (UID: "b5e23ea0-bdeb-4f81-882f-fce7b54b73d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.996237 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-utilities\") pod \"ca490a5a-c686-44ac-b282-d260a32fbe71\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.996290 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-catalog-content\") pod \"ca490a5a-c686-44ac-b282-d260a32fbe71\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.996395 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqmq8\" (UniqueName: \"kubernetes.io/projected/ca490a5a-c686-44ac-b282-d260a32fbe71-kube-api-access-nqmq8\") pod \"ca490a5a-c686-44ac-b282-d260a32fbe71\" (UID: \"ca490a5a-c686-44ac-b282-d260a32fbe71\") " Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.996667 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.996685 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.996699 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74bc2\" (UniqueName: \"kubernetes.io/projected/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9-kube-api-access-74bc2\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:16 crc kubenswrapper[4888]: I1006 15:04:16.998146 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-utilities" (OuterVolumeSpecName: "utilities") pod "ca490a5a-c686-44ac-b282-d260a32fbe71" (UID: "ca490a5a-c686-44ac-b282-d260a32fbe71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.005095 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca490a5a-c686-44ac-b282-d260a32fbe71-kube-api-access-nqmq8" (OuterVolumeSpecName: "kube-api-access-nqmq8") pod "ca490a5a-c686-44ac-b282-d260a32fbe71" (UID: "ca490a5a-c686-44ac-b282-d260a32fbe71"). InnerVolumeSpecName "kube-api-access-nqmq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.047444 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca490a5a-c686-44ac-b282-d260a32fbe71" (UID: "ca490a5a-c686-44ac-b282-d260a32fbe71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.097860 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.097905 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca490a5a-c686-44ac-b282-d260a32fbe71-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.097919 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqmq8\" (UniqueName: \"kubernetes.io/projected/ca490a5a-c686-44ac-b282-d260a32fbe71-kube-api-access-nqmq8\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.114483 4888 generic.go:334] "Generic (PLEG): container finished" podID="ca490a5a-c686-44ac-b282-d260a32fbe71" containerID="7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b" exitCode=0 Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.114536 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-574kn" event={"ID":"ca490a5a-c686-44ac-b282-d260a32fbe71","Type":"ContainerDied","Data":"7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b"} Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.114561 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-574kn" event={"ID":"ca490a5a-c686-44ac-b282-d260a32fbe71","Type":"ContainerDied","Data":"d48b0499b29a31fc98232b7352767ab302a23079454df272db496cea1a2bbb13"} Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.114576 4888 scope.go:117] "RemoveContainer" containerID="7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.114689 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-574kn" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.120138 4888 generic.go:334] "Generic (PLEG): container finished" podID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" containerID="80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d" exitCode=0 Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.120374 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n94pp" event={"ID":"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9","Type":"ContainerDied","Data":"80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d"} Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.120475 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n94pp" event={"ID":"b5e23ea0-bdeb-4f81-882f-fce7b54b73d9","Type":"ContainerDied","Data":"56ba9d91dd4e023862be6d4df7334ee9ec9b1d7e6f17c1d384678a20f0e74b55"} Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.120593 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n94pp" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.139639 4888 scope.go:117] "RemoveContainer" containerID="715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.147267 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n94pp"] Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.153219 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n94pp"] Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.157701 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-574kn"] Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.166460 4888 scope.go:117] "RemoveContainer" containerID="409592badf5bdeab40ba11f2b5eb93a0ce03ae3137254363fc46edc0a08b3578" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.167733 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-574kn"] Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.168771 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p7mdg" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.182568 4888 scope.go:117] "RemoveContainer" containerID="7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b" Oct 06 15:04:17 crc kubenswrapper[4888]: E1006 15:04:17.182965 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b\": container with ID starting with 7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b not found: ID does not exist" containerID="7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.183001 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b"} err="failed to get container status \"7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b\": rpc error: code = NotFound desc = could not find container \"7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b\": container with ID starting with 7fa6e9c3ff4900e0fa1d7a311576044215e609edec454226285cc2c360065d4b not found: ID does not exist" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.183027 4888 scope.go:117] "RemoveContainer" containerID="715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803" Oct 06 15:04:17 crc kubenswrapper[4888]: E1006 15:04:17.183324 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803\": container with ID starting with 715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803 not found: ID does not exist" containerID="715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.183370 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803"} err="failed to get container status \"715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803\": rpc error: code = 
NotFound desc = could not find container \"715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803\": container with ID starting with 715aa054b5cba198ae64ce94282d513207a2755c426573de489d43a6a2a69803 not found: ID does not exist" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.183402 4888 scope.go:117] "RemoveContainer" containerID="409592badf5bdeab40ba11f2b5eb93a0ce03ae3137254363fc46edc0a08b3578" Oct 06 15:04:17 crc kubenswrapper[4888]: E1006 15:04:17.183709 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409592badf5bdeab40ba11f2b5eb93a0ce03ae3137254363fc46edc0a08b3578\": container with ID starting with 409592badf5bdeab40ba11f2b5eb93a0ce03ae3137254363fc46edc0a08b3578 not found: ID does not exist" containerID="409592badf5bdeab40ba11f2b5eb93a0ce03ae3137254363fc46edc0a08b3578" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.183739 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409592badf5bdeab40ba11f2b5eb93a0ce03ae3137254363fc46edc0a08b3578"} err="failed to get container status \"409592badf5bdeab40ba11f2b5eb93a0ce03ae3137254363fc46edc0a08b3578\": rpc error: code = NotFound desc = could not find container \"409592badf5bdeab40ba11f2b5eb93a0ce03ae3137254363fc46edc0a08b3578\": container with ID starting with 409592badf5bdeab40ba11f2b5eb93a0ce03ae3137254363fc46edc0a08b3578 not found: ID does not exist" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.183758 4888 scope.go:117] "RemoveContainer" containerID="80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.200841 4888 scope.go:117] "RemoveContainer" containerID="e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.215172 4888 scope.go:117] "RemoveContainer" containerID="e30b0500f911c0e883c9160b1e5e377408146d03dd5d14ff9f65f61da15b3101" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.228881 4888 scope.go:117] "RemoveContainer" containerID="80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d" Oct 06 15:04:17 crc kubenswrapper[4888]: E1006 15:04:17.229409 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d\": container with ID starting with 80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d not found: ID does not exist" containerID="80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.229474 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d"} err="failed to get container status \"80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d\": rpc error: code = NotFound desc = could not find container \"80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d\": container with ID starting with 80fe47bac2b0b0c604ad79404fd965ca0222430671587db20f7da422af120f3d not found: ID does not exist" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.229528 4888 scope.go:117] "RemoveContainer" containerID="e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf" Oct 06 15:04:17 crc kubenswrapper[4888]: E1006 15:04:17.230202 4888 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf\": container with ID starting with e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf not found: ID does not exist" containerID="e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.230239 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf"} err="failed to get container status \"e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf\": rpc error: code = NotFound desc = could not find container \"e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf\": container with ID starting with e28a9ef79d6790663d561f4e6b4f639cb5f4d2f958804b92c10958fd44fe91bf not found: ID does not exist" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.230268 4888 scope.go:117] "RemoveContainer" containerID="e30b0500f911c0e883c9160b1e5e377408146d03dd5d14ff9f65f61da15b3101" Oct 06 15:04:17 crc kubenswrapper[4888]: E1006 15:04:17.230608 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e30b0500f911c0e883c9160b1e5e377408146d03dd5d14ff9f65f61da15b3101\": container with ID starting with e30b0500f911c0e883c9160b1e5e377408146d03dd5d14ff9f65f61da15b3101 not found: ID does not exist" containerID="e30b0500f911c0e883c9160b1e5e377408146d03dd5d14ff9f65f61da15b3101" Oct 06 15:04:17 crc kubenswrapper[4888]: I1006 15:04:17.230631 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e30b0500f911c0e883c9160b1e5e377408146d03dd5d14ff9f65f61da15b3101"} err="failed to get container status \"e30b0500f911c0e883c9160b1e5e377408146d03dd5d14ff9f65f61da15b3101\": rpc error: code = NotFound desc = could not find container \"e30b0500f911c0e883c9160b1e5e377408146d03dd5d14ff9f65f61da15b3101\": container with ID starting with e30b0500f911c0e883c9160b1e5e377408146d03dd5d14ff9f65f61da15b3101 not found: ID does not exist" Oct 06 15:04:18 crc kubenswrapper[4888]: I1006 15:04:18.927200 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" path="/var/lib/kubelet/pods/b5e23ea0-bdeb-4f81-882f-fce7b54b73d9/volumes" Oct 06 15:04:18 crc kubenswrapper[4888]: I1006 15:04:18.928225 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca490a5a-c686-44ac-b282-d260a32fbe71" path="/var/lib/kubelet/pods/ca490a5a-c686-44ac-b282-d260a32fbe71/volumes" Oct 06 15:04:20 crc kubenswrapper[4888]: I1006 15:04:20.955199 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmkzm"] Oct 06 15:04:20 crc kubenswrapper[4888]: I1006 15:04:20.955472 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jmkzm" podUID="98174955-9911-4a94-8149-a6572031b287" containerName="registry-server" containerID="cri-o://c3dcca1f134596d109b74194a24b89ebf04747265cf2ef6eca05f16bfa47f7c8" gracePeriod=2 Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.143539 4888 generic.go:334] "Generic (PLEG): container finished" podID="98174955-9911-4a94-8149-a6572031b287" containerID="c3dcca1f134596d109b74194a24b89ebf04747265cf2ef6eca05f16bfa47f7c8" exitCode=0 Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.143734 4888 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmkzm" event={"ID":"98174955-9911-4a94-8149-a6572031b287","Type":"ContainerDied","Data":"c3dcca1f134596d109b74194a24b89ebf04747265cf2ef6eca05f16bfa47f7c8"} Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.305749 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmkzm" Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.449636 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-utilities\") pod \"98174955-9911-4a94-8149-a6572031b287\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.449711 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-catalog-content\") pod \"98174955-9911-4a94-8149-a6572031b287\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.449789 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4zsz\" (UniqueName: \"kubernetes.io/projected/98174955-9911-4a94-8149-a6572031b287-kube-api-access-c4zsz\") pod \"98174955-9911-4a94-8149-a6572031b287\" (UID: \"98174955-9911-4a94-8149-a6572031b287\") " Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.450618 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-utilities" (OuterVolumeSpecName: "utilities") pod "98174955-9911-4a94-8149-a6572031b287" (UID: "98174955-9911-4a94-8149-a6572031b287"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.455069 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98174955-9911-4a94-8149-a6572031b287-kube-api-access-c4zsz" (OuterVolumeSpecName: "kube-api-access-c4zsz") pod "98174955-9911-4a94-8149-a6572031b287" (UID: "98174955-9911-4a94-8149-a6572031b287"). InnerVolumeSpecName "kube-api-access-c4zsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.551280 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4zsz\" (UniqueName: \"kubernetes.io/projected/98174955-9911-4a94-8149-a6572031b287-kube-api-access-c4zsz\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.551316 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.560844 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98174955-9911-4a94-8149-a6572031b287" (UID: "98174955-9911-4a94-8149-a6572031b287"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:04:21 crc kubenswrapper[4888]: I1006 15:04:21.653033 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98174955-9911-4a94-8149-a6572031b287-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:22 crc kubenswrapper[4888]: I1006 15:04:22.165150 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmkzm" event={"ID":"98174955-9911-4a94-8149-a6572031b287","Type":"ContainerDied","Data":"bcf179e9e87cc1cd6d83d50470f9a7dc595aba25b8996cb2907e6dfeabb04c11"} Oct 06 15:04:22 crc kubenswrapper[4888]: I1006 15:04:22.165238 4888 scope.go:117] "RemoveContainer" containerID="c3dcca1f134596d109b74194a24b89ebf04747265cf2ef6eca05f16bfa47f7c8" Oct 06 15:04:22 crc kubenswrapper[4888]: I1006 15:04:22.165438 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmkzm" Oct 06 15:04:22 crc kubenswrapper[4888]: I1006 15:04:22.186278 4888 scope.go:117] "RemoveContainer" containerID="06cce4ef1b76cca84e033a31e087cef2d89d0d11ebed28013c7ad33ac1a7b487" Oct 06 15:04:22 crc kubenswrapper[4888]: I1006 15:04:22.199216 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmkzm"] Oct 06 15:04:22 crc kubenswrapper[4888]: I1006 15:04:22.206221 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jmkzm"] Oct 06 15:04:22 crc kubenswrapper[4888]: I1006 15:04:22.210684 4888 scope.go:117] "RemoveContainer" containerID="f9f1a3f94b4ee62b7b4cc4abc9bc2d7668b4ce7bf7ce7554329dc577cb9c801d" Oct 06 15:04:22 crc kubenswrapper[4888]: I1006 15:04:22.927035 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98174955-9911-4a94-8149-a6572031b287" path="/var/lib/kubelet/pods/98174955-9911-4a94-8149-a6572031b287/volumes" Oct 06 15:04:24 crc kubenswrapper[4888]: I1006 15:04:24.953783 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" podUID="6093f83d-6829-4712-91d0-eeed9f69d78d" containerName="oauth-openshift" containerID="cri-o://8a645f738b4680814001eaf6f42c6d439d3719436c0f9d06963f2b6b946c2d6f" gracePeriod=15 Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.185128 4888 generic.go:334] "Generic (PLEG): container finished" podID="6093f83d-6829-4712-91d0-eeed9f69d78d" containerID="8a645f738b4680814001eaf6f42c6d439d3719436c0f9d06963f2b6b946c2d6f" exitCode=0 Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.185170 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" event={"ID":"6093f83d-6829-4712-91d0-eeed9f69d78d","Type":"ContainerDied","Data":"8a645f738b4680814001eaf6f42c6d439d3719436c0f9d06963f2b6b946c2d6f"} Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.330027 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.510565 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-login\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.510977 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-serving-cert\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.511078 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-idp-0-file-data\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.511159 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-error\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.511256 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-session\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.511346 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-router-certs\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.511420 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-ocp-branding-template\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.511502 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-cliconfig\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.511595 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-policies\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc 
kubenswrapper[4888]: I1006 15:04:25.511676 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-service-ca\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.511783 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-provider-selection\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.511905 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-dir\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.512037 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs6jx\" (UniqueName: \"kubernetes.io/projected/6093f83d-6829-4712-91d0-eeed9f69d78d-kube-api-access-rs6jx\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.512151 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-trusted-ca-bundle\") pod \"6093f83d-6829-4712-91d0-eeed9f69d78d\" (UID: \"6093f83d-6829-4712-91d0-eeed9f69d78d\") " Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.512631 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.512941 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.513407 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.513932 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.519607 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.519831 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.520294 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.520382 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.519667 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.522633 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.523581 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.525555 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.525899 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6093f83d-6829-4712-91d0-eeed9f69d78d-kube-api-access-rs6jx" (OuterVolumeSpecName: "kube-api-access-rs6jx") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "kube-api-access-rs6jx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.526045 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6093f83d-6829-4712-91d0-eeed9f69d78d" (UID: "6093f83d-6829-4712-91d0-eeed9f69d78d"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613396 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613435 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613449 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613461 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613473 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613487 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613502 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613514 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613561 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613575 4888 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613587 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613601 4888 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6093f83d-6829-4712-91d0-eeed9f69d78d-audit-dir\") 
on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613613 4888 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6093f83d-6829-4712-91d0-eeed9f69d78d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:25 crc kubenswrapper[4888]: I1006 15:04:25.613628 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs6jx\" (UniqueName: \"kubernetes.io/projected/6093f83d-6829-4712-91d0-eeed9f69d78d-kube-api-access-rs6jx\") on node \"crc\" DevicePath \"\"" Oct 06 15:04:26 crc kubenswrapper[4888]: I1006 15:04:26.191046 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" event={"ID":"6093f83d-6829-4712-91d0-eeed9f69d78d","Type":"ContainerDied","Data":"ef42b5aa35e5efca181900c681adf1fd6442d703cd420b6e71444e8cbbd623d2"} Oct 06 15:04:26 crc kubenswrapper[4888]: I1006 15:04:26.191267 4888 scope.go:117] "RemoveContainer" containerID="8a645f738b4680814001eaf6f42c6d439d3719436c0f9d06963f2b6b946c2d6f" Oct 06 15:04:26 crc kubenswrapper[4888]: I1006 15:04:26.191369 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7g2cw" Oct 06 15:04:26 crc kubenswrapper[4888]: I1006 15:04:26.242066 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7g2cw"] Oct 06 15:04:26 crc kubenswrapper[4888]: I1006 15:04:26.245287 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7g2cw"] Oct 06 15:04:26 crc kubenswrapper[4888]: I1006 15:04:26.932105 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6093f83d-6829-4712-91d0-eeed9f69d78d" path="/var/lib/kubelet/pods/6093f83d-6829-4712-91d0-eeed9f69d78d/volumes" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.973428 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b8b86856-9btq8"] Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.973988 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe51362-c625-4856-adbd-6fa6f1380156" containerName="extract-utilities" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974008 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe51362-c625-4856-adbd-6fa6f1380156" containerName="extract-utilities" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974024 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974034 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974050 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6093f83d-6829-4712-91d0-eeed9f69d78d" containerName="oauth-openshift" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974060 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="6093f83d-6829-4712-91d0-eeed9f69d78d" containerName="oauth-openshift" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974080 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98174955-9911-4a94-8149-a6572031b287" containerName="extract-content" Oct 
06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974091 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="98174955-9911-4a94-8149-a6572031b287" containerName="extract-content" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974111 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" containerName="extract-content" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974122 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" containerName="extract-content" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974138 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" containerName="extract-utilities" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974148 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" containerName="extract-utilities" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974165 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975300d5-f80e-44ba-b4b2-0c322b16d8b5" containerName="pruner" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974176 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="975300d5-f80e-44ba-b4b2-0c322b16d8b5" containerName="pruner" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974190 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca490a5a-c686-44ac-b282-d260a32fbe71" containerName="extract-content" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974200 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca490a5a-c686-44ac-b282-d260a32fbe71" containerName="extract-content" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974215 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca490a5a-c686-44ac-b282-d260a32fbe71" containerName="extract-utilities" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974225 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca490a5a-c686-44ac-b282-d260a32fbe71" containerName="extract-utilities" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974242 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe51362-c625-4856-adbd-6fa6f1380156" containerName="extract-content" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974252 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe51362-c625-4856-adbd-6fa6f1380156" containerName="extract-content" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974262 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca490a5a-c686-44ac-b282-d260a32fbe71" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974272 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca490a5a-c686-44ac-b282-d260a32fbe71" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974288 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe51362-c625-4856-adbd-6fa6f1380156" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974299 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe51362-c625-4856-adbd-6fa6f1380156" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974313 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98174955-9911-4a94-8149-a6572031b287" 
containerName="extract-utilities" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974323 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="98174955-9911-4a94-8149-a6572031b287" containerName="extract-utilities" Oct 06 15:04:28 crc kubenswrapper[4888]: E1006 15:04:28.974338 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98174955-9911-4a94-8149-a6572031b287" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974349 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="98174955-9911-4a94-8149-a6572031b287" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974520 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="98174955-9911-4a94-8149-a6572031b287" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974543 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe51362-c625-4856-adbd-6fa6f1380156" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974555 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="975300d5-f80e-44ba-b4b2-0c322b16d8b5" containerName="pruner" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974572 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e23ea0-bdeb-4f81-882f-fce7b54b73d9" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974592 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="6093f83d-6829-4712-91d0-eeed9f69d78d" containerName="oauth-openshift" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.974605 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca490a5a-c686-44ac-b282-d260a32fbe71" containerName="registry-server" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.975227 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.979836 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.980290 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.980685 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.981014 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.982596 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.982861 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.984058 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.984471 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.984622 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.984764 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.986398 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b8b86856-9btq8"] Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.986530 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.987032 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.993709 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 06 15:04:28 crc kubenswrapper[4888]: I1006 15:04:28.999355 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.003877 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.158420 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159314 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159339 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159390 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a64e24b1-e1a5-4264-9784-1fb67727b747-audit-dir\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159409 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159432 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159475 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159506 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7274f\" (UniqueName: \"kubernetes.io/projected/a64e24b1-e1a5-4264-9784-1fb67727b747-kube-api-access-7274f\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159558 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-session\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159619 4888 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-audit-policies\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159677 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159711 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159762 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-template-error\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159850 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-template-login\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.159881 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.260883 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-template-login\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.260931 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " 
pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.260961 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.260977 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.261000 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a64e24b1-e1a5-4264-9784-1fb67727b747-audit-dir\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.261017 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.261034 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.261053 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.261072 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7274f\" (UniqueName: \"kubernetes.io/projected/a64e24b1-e1a5-4264-9784-1fb67727b747-kube-api-access-7274f\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.261103 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-session\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 
crc kubenswrapper[4888]: I1006 15:04:29.261123 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-audit-policies\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.261146 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.261165 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.261188 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-template-error\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.261993 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a64e24b1-e1a5-4264-9784-1fb67727b747-audit-dir\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.262611 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.262614 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.263258 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.267539 4888 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.268178 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.268179 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.272405 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a64e24b1-e1a5-4264-9784-1fb67727b747-audit-policies\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.277212 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.277580 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.277891 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-template-login\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.278640 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-user-template-error\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.279333 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/a64e24b1-e1a5-4264-9784-1fb67727b747-v4-0-config-system-session\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.286921 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7274f\" (UniqueName: \"kubernetes.io/projected/a64e24b1-e1a5-4264-9784-1fb67727b747-kube-api-access-7274f\") pod \"oauth-openshift-6b8b86856-9btq8\" (UID: \"a64e24b1-e1a5-4264-9784-1fb67727b747\") " pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.303584 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:29 crc kubenswrapper[4888]: I1006 15:04:29.674397 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b8b86856-9btq8"] Oct 06 15:04:30 crc kubenswrapper[4888]: I1006 15:04:30.218696 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" event={"ID":"a64e24b1-e1a5-4264-9784-1fb67727b747","Type":"ContainerStarted","Data":"a3af9ab4c4c505b8c26cb1eb7117fd0e85c94a6f140426ab4335255f175dc391"} Oct 06 15:04:30 crc kubenswrapper[4888]: I1006 15:04:30.219046 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:30 crc kubenswrapper[4888]: I1006 15:04:30.219065 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" event={"ID":"a64e24b1-e1a5-4264-9784-1fb67727b747","Type":"ContainerStarted","Data":"abed9e97b7ced0f2b41f03c7432c2a0d7209fa7dab975396b0f25eb31c8d59bd"} Oct 06 15:04:30 crc kubenswrapper[4888]: I1006 15:04:30.240735 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" podStartSLOduration=31.240718019 podStartE2EDuration="31.240718019s" podCreationTimestamp="2025-10-06 15:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:04:30.238545473 +0000 UTC m=+210.050896191" watchObservedRunningTime="2025-10-06 15:04:30.240718019 +0000 UTC m=+210.053068737" Oct 06 15:04:30 crc kubenswrapper[4888]: I1006 15:04:30.390403 4888 patch_prober.go:28] interesting pod/oauth-openshift-6b8b86856-9btq8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 06 15:04:30 crc kubenswrapper[4888]: [+]log ok Oct 06 15:04:30 crc kubenswrapper[4888]: [-]poststarthook/max-in-flight-filter failed: reason withheld Oct 06 15:04:30 crc kubenswrapper[4888]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 06 15:04:30 crc kubenswrapper[4888]: [+]poststarthook/openshift.io-StartUserInformer ok Oct 06 15:04:30 crc kubenswrapper[4888]: healthz check failed Oct 06 15:04:30 crc kubenswrapper[4888]: I1006 15:04:30.390470 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" podUID="a64e24b1-e1a5-4264-9784-1fb67727b747" containerName="oauth-openshift" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 
15:04:31 crc kubenswrapper[4888]: I1006 15:04:31.228221 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6b8b86856-9btq8" Oct 06 15:04:32 crc kubenswrapper[4888]: I1006 15:04:32.563312 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:04:32 crc kubenswrapper[4888]: I1006 15:04:32.563660 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:04:32 crc kubenswrapper[4888]: I1006 15:04:32.563714 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:04:32 crc kubenswrapper[4888]: I1006 15:04:32.564526 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 15:04:32 crc kubenswrapper[4888]: I1006 15:04:32.564609 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a" gracePeriod=600 Oct 06 15:04:33 crc kubenswrapper[4888]: I1006 15:04:33.235504 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a" exitCode=0 Oct 06 15:04:33 crc kubenswrapper[4888]: I1006 15:04:33.235542 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a"} Oct 06 15:04:33 crc kubenswrapper[4888]: I1006 15:04:33.235886 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"0fc88084e8dfb0c728018c0d641727f0db19b908b971796df23c7ddf2d6bca30"} Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.167639 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsv89"] Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.168373 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bsv89" podUID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" containerName="registry-server" containerID="cri-o://ec77e96e4b16dd52deff8e759fa8aef00e108362f3813098410e77a68d68f3b0" gracePeriod=30 Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.181950 4888 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-5nfpw"] Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.182277 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5nfpw" podUID="b8705687-8217-4b9f-bed8-a293b8a041b0" containerName="registry-server" containerID="cri-o://f2028032daa938eb56f13c0ef9ba06fa222e2045ef5cbceb3a864784b0c84735" gracePeriod=30 Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.193696 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p26ff"] Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.194078 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" podUID="821ef1e1-2128-4c28-9030-8faacb7d5fb7" containerName="marketplace-operator" containerID="cri-o://9047af1b88f2929e7e6ef0ed9e258db71888d8e6e8ba78dab50b616d3631b262" gracePeriod=30 Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.208547 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt5hl"] Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.208822 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zt5hl" podUID="c81c9b40-801b-4b14-84c3-e684bbdae002" containerName="registry-server" containerID="cri-o://c0c0abe56452fdb6a9b6b39a7891d7907ac3a4594fbd16a077c8c64cad7f1ad4" gracePeriod=30 Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.225471 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fsj8b"] Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.226155 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.229734 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7mdg"] Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.230031 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p7mdg" podUID="9b9f33d4-46f4-47e3-a852-9dd264924080" containerName="registry-server" containerID="cri-o://c5cf62f84013a375421645c57bf66bce3008ce95f52128cfb70795b3a3df6280" gracePeriod=30 Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.255132 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fsj8b"] Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.351663 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw8rp\" (UniqueName: \"kubernetes.io/projected/7c70fbc7-53ed-416b-a3f3-754b465569d7-kube-api-access-nw8rp\") pod \"marketplace-operator-79b997595-fsj8b\" (UID: \"7c70fbc7-53ed-416b-a3f3-754b465569d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.352037 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c70fbc7-53ed-416b-a3f3-754b465569d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fsj8b\" (UID: \"7c70fbc7-53ed-416b-a3f3-754b465569d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.352365 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7c70fbc7-53ed-416b-a3f3-754b465569d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fsj8b\" (UID: \"7c70fbc7-53ed-416b-a3f3-754b465569d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.410249 4888 generic.go:334] "Generic (PLEG): container finished" podID="c81c9b40-801b-4b14-84c3-e684bbdae002" containerID="c0c0abe56452fdb6a9b6b39a7891d7907ac3a4594fbd16a077c8c64cad7f1ad4" exitCode=0 Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.410302 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt5hl" event={"ID":"c81c9b40-801b-4b14-84c3-e684bbdae002","Type":"ContainerDied","Data":"c0c0abe56452fdb6a9b6b39a7891d7907ac3a4594fbd16a077c8c64cad7f1ad4"} Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.431652 4888 generic.go:334] "Generic (PLEG): container finished" podID="9b9f33d4-46f4-47e3-a852-9dd264924080" containerID="c5cf62f84013a375421645c57bf66bce3008ce95f52128cfb70795b3a3df6280" exitCode=0 Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.431704 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7mdg" event={"ID":"9b9f33d4-46f4-47e3-a852-9dd264924080","Type":"ContainerDied","Data":"c5cf62f84013a375421645c57bf66bce3008ce95f52128cfb70795b3a3df6280"} Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.433809 4888 generic.go:334] "Generic (PLEG): container finished" podID="821ef1e1-2128-4c28-9030-8faacb7d5fb7" 
containerID="9047af1b88f2929e7e6ef0ed9e258db71888d8e6e8ba78dab50b616d3631b262" exitCode=0 Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.433834 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" event={"ID":"821ef1e1-2128-4c28-9030-8faacb7d5fb7","Type":"ContainerDied","Data":"9047af1b88f2929e7e6ef0ed9e258db71888d8e6e8ba78dab50b616d3631b262"} Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.441837 4888 generic.go:334] "Generic (PLEG): container finished" podID="b8705687-8217-4b9f-bed8-a293b8a041b0" containerID="f2028032daa938eb56f13c0ef9ba06fa222e2045ef5cbceb3a864784b0c84735" exitCode=0 Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.441891 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nfpw" event={"ID":"b8705687-8217-4b9f-bed8-a293b8a041b0","Type":"ContainerDied","Data":"f2028032daa938eb56f13c0ef9ba06fa222e2045ef5cbceb3a864784b0c84735"} Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.453521 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7c70fbc7-53ed-416b-a3f3-754b465569d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fsj8b\" (UID: \"7c70fbc7-53ed-416b-a3f3-754b465569d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.453619 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw8rp\" (UniqueName: \"kubernetes.io/projected/7c70fbc7-53ed-416b-a3f3-754b465569d7-kube-api-access-nw8rp\") pod \"marketplace-operator-79b997595-fsj8b\" (UID: \"7c70fbc7-53ed-416b-a3f3-754b465569d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.453647 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c70fbc7-53ed-416b-a3f3-754b465569d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fsj8b\" (UID: \"7c70fbc7-53ed-416b-a3f3-754b465569d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.455006 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c70fbc7-53ed-416b-a3f3-754b465569d7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fsj8b\" (UID: \"7c70fbc7-53ed-416b-a3f3-754b465569d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.455928 4888 generic.go:334] "Generic (PLEG): container finished" podID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" containerID="ec77e96e4b16dd52deff8e759fa8aef00e108362f3813098410e77a68d68f3b0" exitCode=0 Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.455960 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsv89" event={"ID":"43206429-01b9-4d6c-8f90-a2f02ca09a1d","Type":"ContainerDied","Data":"ec77e96e4b16dd52deff8e759fa8aef00e108362f3813098410e77a68d68f3b0"} Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.460548 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/7c70fbc7-53ed-416b-a3f3-754b465569d7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fsj8b\" (UID: \"7c70fbc7-53ed-416b-a3f3-754b465569d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.472821 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw8rp\" (UniqueName: \"kubernetes.io/projected/7c70fbc7-53ed-416b-a3f3-754b465569d7-kube-api-access-nw8rp\") pod \"marketplace-operator-79b997595-fsj8b\" (UID: \"7c70fbc7-53ed-416b-a3f3-754b465569d7\") " pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.546016 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.625081 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsv89" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.729397 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.734496 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7mdg" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.735773 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.760902 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-utilities\") pod \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.760980 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6b8x\" (UniqueName: \"kubernetes.io/projected/43206429-01b9-4d6c-8f90-a2f02ca09a1d-kube-api-access-k6b8x\") pod \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.761029 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-catalog-content\") pod \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\" (UID: \"43206429-01b9-4d6c-8f90-a2f02ca09a1d\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.762166 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-utilities" (OuterVolumeSpecName: "utilities") pod "43206429-01b9-4d6c-8f90-a2f02ca09a1d" (UID: "43206429-01b9-4d6c-8f90-a2f02ca09a1d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.762481 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.771408 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43206429-01b9-4d6c-8f90-a2f02ca09a1d-kube-api-access-k6b8x" (OuterVolumeSpecName: "kube-api-access-k6b8x") pod "43206429-01b9-4d6c-8f90-a2f02ca09a1d" (UID: "43206429-01b9-4d6c-8f90-a2f02ca09a1d"). InnerVolumeSpecName "kube-api-access-k6b8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.781151 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nfpw" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.852343 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43206429-01b9-4d6c-8f90-a2f02ca09a1d" (UID: "43206429-01b9-4d6c-8f90-a2f02ca09a1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.863143 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-catalog-content\") pod \"c81c9b40-801b-4b14-84c3-e684bbdae002\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.863218 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skqjz\" (UniqueName: \"kubernetes.io/projected/821ef1e1-2128-4c28-9030-8faacb7d5fb7-kube-api-access-skqjz\") pod \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.863274 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-utilities\") pod \"9b9f33d4-46f4-47e3-a852-9dd264924080\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.863300 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-catalog-content\") pod \"b8705687-8217-4b9f-bed8-a293b8a041b0\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.863360 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc9zv\" (UniqueName: \"kubernetes.io/projected/c81c9b40-801b-4b14-84c3-e684bbdae002-kube-api-access-nc9zv\") pod \"c81c9b40-801b-4b14-84c3-e684bbdae002\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.863416 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-utilities\") pod \"c81c9b40-801b-4b14-84c3-e684bbdae002\" (UID: \"c81c9b40-801b-4b14-84c3-e684bbdae002\") 
" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.863956 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-operator-metrics\") pod \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.863978 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-catalog-content\") pod \"9b9f33d4-46f4-47e3-a852-9dd264924080\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.863998 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhm7d\" (UniqueName: \"kubernetes.io/projected/b8705687-8217-4b9f-bed8-a293b8a041b0-kube-api-access-fhm7d\") pod \"b8705687-8217-4b9f-bed8-a293b8a041b0\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.864016 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-utilities\") pod \"b8705687-8217-4b9f-bed8-a293b8a041b0\" (UID: \"b8705687-8217-4b9f-bed8-a293b8a041b0\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.864251 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-utilities" (OuterVolumeSpecName: "utilities") pod "9b9f33d4-46f4-47e3-a852-9dd264924080" (UID: "9b9f33d4-46f4-47e3-a852-9dd264924080"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.864708 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-utilities" (OuterVolumeSpecName: "utilities") pod "c81c9b40-801b-4b14-84c3-e684bbdae002" (UID: "c81c9b40-801b-4b14-84c3-e684bbdae002"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.865036 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-trusted-ca\") pod \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\" (UID: \"821ef1e1-2128-4c28-9030-8faacb7d5fb7\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.865096 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fkzl\" (UniqueName: \"kubernetes.io/projected/9b9f33d4-46f4-47e3-a852-9dd264924080-kube-api-access-4fkzl\") pod \"9b9f33d4-46f4-47e3-a852-9dd264924080\" (UID: \"9b9f33d4-46f4-47e3-a852-9dd264924080\") " Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.865417 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6b8x\" (UniqueName: \"kubernetes.io/projected/43206429-01b9-4d6c-8f90-a2f02ca09a1d-kube-api-access-k6b8x\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.865434 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43206429-01b9-4d6c-8f90-a2f02ca09a1d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.870074 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8705687-8217-4b9f-bed8-a293b8a041b0-kube-api-access-fhm7d" (OuterVolumeSpecName: "kube-api-access-fhm7d") pod "b8705687-8217-4b9f-bed8-a293b8a041b0" (UID: "b8705687-8217-4b9f-bed8-a293b8a041b0"). InnerVolumeSpecName "kube-api-access-fhm7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.871828 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "821ef1e1-2128-4c28-9030-8faacb7d5fb7" (UID: "821ef1e1-2128-4c28-9030-8faacb7d5fb7"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.873334 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "821ef1e1-2128-4c28-9030-8faacb7d5fb7" (UID: "821ef1e1-2128-4c28-9030-8faacb7d5fb7"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.873624 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821ef1e1-2128-4c28-9030-8faacb7d5fb7-kube-api-access-skqjz" (OuterVolumeSpecName: "kube-api-access-skqjz") pod "821ef1e1-2128-4c28-9030-8faacb7d5fb7" (UID: "821ef1e1-2128-4c28-9030-8faacb7d5fb7"). InnerVolumeSpecName "kube-api-access-skqjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.873708 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c81c9b40-801b-4b14-84c3-e684bbdae002-kube-api-access-nc9zv" (OuterVolumeSpecName: "kube-api-access-nc9zv") pod "c81c9b40-801b-4b14-84c3-e684bbdae002" (UID: "c81c9b40-801b-4b14-84c3-e684bbdae002"). InnerVolumeSpecName "kube-api-access-nc9zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.873824 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9f33d4-46f4-47e3-a852-9dd264924080-kube-api-access-4fkzl" (OuterVolumeSpecName: "kube-api-access-4fkzl") pod "9b9f33d4-46f4-47e3-a852-9dd264924080" (UID: "9b9f33d4-46f4-47e3-a852-9dd264924080"). InnerVolumeSpecName "kube-api-access-4fkzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.877643 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-utilities" (OuterVolumeSpecName: "utilities") pod "b8705687-8217-4b9f-bed8-a293b8a041b0" (UID: "b8705687-8217-4b9f-bed8-a293b8a041b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.888137 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c81c9b40-801b-4b14-84c3-e684bbdae002" (UID: "c81c9b40-801b-4b14-84c3-e684bbdae002"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.933027 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8705687-8217-4b9f-bed8-a293b8a041b0" (UID: "b8705687-8217-4b9f-bed8-a293b8a041b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.966972 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.966999 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.967010 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc9zv\" (UniqueName: \"kubernetes.io/projected/c81c9b40-801b-4b14-84c3-e684bbdae002-kube-api-access-nc9zv\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.967018 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.967026 4888 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.967035 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhm7d\" (UniqueName: \"kubernetes.io/projected/b8705687-8217-4b9f-bed8-a293b8a041b0-kube-api-access-fhm7d\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.967043 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8705687-8217-4b9f-bed8-a293b8a041b0-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.967050 4888 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/821ef1e1-2128-4c28-9030-8faacb7d5fb7-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.967060 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fkzl\" (UniqueName: \"kubernetes.io/projected/9b9f33d4-46f4-47e3-a852-9dd264924080-kube-api-access-4fkzl\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.967092 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81c9b40-801b-4b14-84c3-e684bbdae002-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.967100 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skqjz\" (UniqueName: \"kubernetes.io/projected/821ef1e1-2128-4c28-9030-8faacb7d5fb7-kube-api-access-skqjz\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:04 crc kubenswrapper[4888]: I1006 15:05:04.991819 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fsj8b"] Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.010511 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"9b9f33d4-46f4-47e3-a852-9dd264924080" (UID: "9b9f33d4-46f4-47e3-a852-9dd264924080"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.068576 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9f33d4-46f4-47e3-a852-9dd264924080-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.465102 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bsv89" event={"ID":"43206429-01b9-4d6c-8f90-a2f02ca09a1d","Type":"ContainerDied","Data":"9a0a808c99a6dc6bc5827169a791cc0086cf6a2441a1c8639a24b7b6fcc7e5cf"} Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.465150 4888 scope.go:117] "RemoveContainer" containerID="ec77e96e4b16dd52deff8e759fa8aef00e108362f3813098410e77a68d68f3b0" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.466260 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bsv89" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.467392 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zt5hl" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.468025 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zt5hl" event={"ID":"c81c9b40-801b-4b14-84c3-e684bbdae002","Type":"ContainerDied","Data":"2ddc02d2afc764b938c860e1794b46bb2ee0d0aae80e28abff25b076a662b301"} Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.474360 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7mdg" event={"ID":"9b9f33d4-46f4-47e3-a852-9dd264924080","Type":"ContainerDied","Data":"ab71b6408f0d3cd061e977c6756b63d74638b00b242aadbfee07eb9dda804b85"} Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.474458 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7mdg" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.478058 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" event={"ID":"7c70fbc7-53ed-416b-a3f3-754b465569d7","Type":"ContainerStarted","Data":"7f62fc1e9c47e160789ae9b613713e3669556704244a28006c297894f6f99d63"} Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.478083 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" event={"ID":"7c70fbc7-53ed-416b-a3f3-754b465569d7","Type":"ContainerStarted","Data":"3cc69d1fe0002355fed7ddc0f75bf95306a03a2e20ff3356189866b7039d8d3e"} Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.478750 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.481478 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" event={"ID":"821ef1e1-2128-4c28-9030-8faacb7d5fb7","Type":"ContainerDied","Data":"df5b569edcdb574f49df3686be9604b624ab4ad3bb5b42e5bd535f725cc9c01e"} Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.481521 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p26ff" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.482886 4888 scope.go:117] "RemoveContainer" containerID="bf020af5f4f8ab339c097bde3436ffa48e39ad2e49b728056c5f9efe2e701e69" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.483065 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.484846 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nfpw" event={"ID":"b8705687-8217-4b9f-bed8-a293b8a041b0","Type":"ContainerDied","Data":"fc763b7355175edbde3b88862949b47412fe31a7422ae32880fe140f5fec3874"} Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.484933 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nfpw" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.496356 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bsv89"] Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.502282 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bsv89"] Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.512370 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt5hl"] Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.514754 4888 scope.go:117] "RemoveContainer" containerID="3dffa297d91836fa33dab47c3122f6e935e57ba526cd5f165717ed0133fb860a" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.516057 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zt5hl"] Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.532528 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fsj8b" podStartSLOduration=1.532486749 podStartE2EDuration="1.532486749s" podCreationTimestamp="2025-10-06 15:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:05:05.530265282 +0000 UTC m=+245.342616010" watchObservedRunningTime="2025-10-06 15:05:05.532486749 +0000 UTC m=+245.344837467" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.539988 4888 scope.go:117] "RemoveContainer" containerID="c0c0abe56452fdb6a9b6b39a7891d7907ac3a4594fbd16a077c8c64cad7f1ad4" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.548065 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p26ff"] Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.556587 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p26ff"] Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.569782 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7mdg"] Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.571260 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p7mdg"] Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.573227 4888 scope.go:117] "RemoveContainer" containerID="7346e2fd1cf4cc6b41a438e1dd390011f2549841de061a487ba7cdaa4255db82" Oct 06 15:05:05 crc kubenswrapper[4888]: 
I1006 15:05:05.582162 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nfpw"] Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.585374 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5nfpw"] Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.607206 4888 scope.go:117] "RemoveContainer" containerID="4ab00d63ab344fd20927a5639bfcf2774a0c888643e538d7a1e2c570d3f14688" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.623394 4888 scope.go:117] "RemoveContainer" containerID="c5cf62f84013a375421645c57bf66bce3008ce95f52128cfb70795b3a3df6280" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.639572 4888 scope.go:117] "RemoveContainer" containerID="ce0b85cac813e0a3f5ce1c5daacc803e1ac1a3c4ce2c694f817a08d50166f477" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.652631 4888 scope.go:117] "RemoveContainer" containerID="0597412d9bacd7c1c357c85d69af10ebe3f6aea8253178dd3c8f66b6f0a6cfbe" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.665929 4888 scope.go:117] "RemoveContainer" containerID="9047af1b88f2929e7e6ef0ed9e258db71888d8e6e8ba78dab50b616d3631b262" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.679989 4888 scope.go:117] "RemoveContainer" containerID="f2028032daa938eb56f13c0ef9ba06fa222e2045ef5cbceb3a864784b0c84735" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.691064 4888 scope.go:117] "RemoveContainer" containerID="bbd1ddd29bd0308c6b0259cbe8b07f10e619a2e95bd55d0faa73d7b24004f6fb" Oct 06 15:05:05 crc kubenswrapper[4888]: I1006 15:05:05.704136 4888 scope.go:117] "RemoveContainer" containerID="7822273e1cfbcc6512d318b1e804056286ed4a87c0cf51de7a3c3910a53c2550" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386006 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pxpqx"] Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386508 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" containerName="extract-utilities" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386527 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" containerName="extract-utilities" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386541 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8705687-8217-4b9f-bed8-a293b8a041b0" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386550 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8705687-8217-4b9f-bed8-a293b8a041b0" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386563 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81c9b40-801b-4b14-84c3-e684bbdae002" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386573 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81c9b40-801b-4b14-84c3-e684bbdae002" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386589 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8705687-8217-4b9f-bed8-a293b8a041b0" containerName="extract-utilities" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386599 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8705687-8217-4b9f-bed8-a293b8a041b0" containerName="extract-utilities" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 
15:05:06.386614 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8705687-8217-4b9f-bed8-a293b8a041b0" containerName="extract-content" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386622 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8705687-8217-4b9f-bed8-a293b8a041b0" containerName="extract-content" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386636 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9f33d4-46f4-47e3-a852-9dd264924080" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386644 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9f33d4-46f4-47e3-a852-9dd264924080" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386655 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81c9b40-801b-4b14-84c3-e684bbdae002" containerName="extract-content" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386663 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81c9b40-801b-4b14-84c3-e684bbdae002" containerName="extract-content" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386674 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386682 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386692 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821ef1e1-2128-4c28-9030-8faacb7d5fb7" containerName="marketplace-operator" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386700 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="821ef1e1-2128-4c28-9030-8faacb7d5fb7" containerName="marketplace-operator" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386711 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81c9b40-801b-4b14-84c3-e684bbdae002" containerName="extract-utilities" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386720 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81c9b40-801b-4b14-84c3-e684bbdae002" containerName="extract-utilities" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386733 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" containerName="extract-content" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386744 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" containerName="extract-content" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386762 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9f33d4-46f4-47e3-a852-9dd264924080" containerName="extract-utilities" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386772 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9f33d4-46f4-47e3-a852-9dd264924080" containerName="extract-utilities" Oct 06 15:05:06 crc kubenswrapper[4888]: E1006 15:05:06.386786 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9f33d4-46f4-47e3-a852-9dd264924080" containerName="extract-content" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386821 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9f33d4-46f4-47e3-a852-9dd264924080" containerName="extract-content" Oct 06 
15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386950 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="821ef1e1-2128-4c28-9030-8faacb7d5fb7" containerName="marketplace-operator" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386970 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81c9b40-801b-4b14-84c3-e684bbdae002" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386984 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8705687-8217-4b9f-bed8-a293b8a041b0" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.386998 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9f33d4-46f4-47e3-a852-9dd264924080" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.387013 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" containerName="registry-server" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.388076 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxpqx" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.392825 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.398439 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxpqx"] Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.484913 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82adbb31-e074-49d1-97fa-123bde18a0b8-catalog-content\") pod \"redhat-marketplace-pxpqx\" (UID: \"82adbb31-e074-49d1-97fa-123bde18a0b8\") " pod="openshift-marketplace/redhat-marketplace-pxpqx" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.486184 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9b8j\" (UniqueName: \"kubernetes.io/projected/82adbb31-e074-49d1-97fa-123bde18a0b8-kube-api-access-q9b8j\") pod \"redhat-marketplace-pxpqx\" (UID: \"82adbb31-e074-49d1-97fa-123bde18a0b8\") " pod="openshift-marketplace/redhat-marketplace-pxpqx" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.486232 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82adbb31-e074-49d1-97fa-123bde18a0b8-utilities\") pod \"redhat-marketplace-pxpqx\" (UID: \"82adbb31-e074-49d1-97fa-123bde18a0b8\") " pod="openshift-marketplace/redhat-marketplace-pxpqx" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.592587 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82adbb31-e074-49d1-97fa-123bde18a0b8-catalog-content\") pod \"redhat-marketplace-pxpqx\" (UID: \"82adbb31-e074-49d1-97fa-123bde18a0b8\") " pod="openshift-marketplace/redhat-marketplace-pxpqx" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.593018 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9b8j\" (UniqueName: \"kubernetes.io/projected/82adbb31-e074-49d1-97fa-123bde18a0b8-kube-api-access-q9b8j\") pod \"redhat-marketplace-pxpqx\" (UID: 
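
The E1006 cpu_manager lines above are routine cleanup, not failures: when the replacement catalog pods are admitted, RemoveStaleState walks the CPU and memory manager state and deletes entries for containers whose pods no longer exist (cpu_manager simply logs each removal at error level, paired with an info-level "Deleted CPUSet assignment"). A minimal sketch of that reconciliation over a state map, using invented types and values:

package main

import "fmt"

// containerKey identifies a resource-state entry by pod UID and
// container name (a stand-in for the managers' real state types).
type containerKey struct{ podUID, container string }

func main() {
	// Hypothetical state map: per-container resource assignments.
	state := map[containerKey]string{
		{"43206429-01b9-4d6c-8f90-a2f02ca09a1d", "registry-server"}:      "cpuset 0-3",
		{"7c70fbc7-53ed-416b-a3f3-754b465569d7", "marketplace-operator"}: "cpuset 0-3",
	}
	// Pods currently active on the node; the first pod above was deleted.
	active := map[string]bool{"7c70fbc7-53ed-416b-a3f3-754b465569d7": true}

	for k := range state {
		if !active[k.podUID] {
			// Stale entry: its pod is gone, so drop the assignment.
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(state, k)
		}
	}
}
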
\"82adbb31-e074-49d1-97fa-123bde18a0b8\") " pod="openshift-marketplace/redhat-marketplace-pxpqx" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.593147 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82adbb31-e074-49d1-97fa-123bde18a0b8-utilities\") pod \"redhat-marketplace-pxpqx\" (UID: \"82adbb31-e074-49d1-97fa-123bde18a0b8\") " pod="openshift-marketplace/redhat-marketplace-pxpqx" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.595139 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82adbb31-e074-49d1-97fa-123bde18a0b8-catalog-content\") pod \"redhat-marketplace-pxpqx\" (UID: \"82adbb31-e074-49d1-97fa-123bde18a0b8\") " pod="openshift-marketplace/redhat-marketplace-pxpqx" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.595343 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82adbb31-e074-49d1-97fa-123bde18a0b8-utilities\") pod \"redhat-marketplace-pxpqx\" (UID: \"82adbb31-e074-49d1-97fa-123bde18a0b8\") " pod="openshift-marketplace/redhat-marketplace-pxpqx" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.606031 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-26d4p"] Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.607214 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-26d4p" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.608197 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-26d4p"] Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.611819 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.619175 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9b8j\" (UniqueName: \"kubernetes.io/projected/82adbb31-e074-49d1-97fa-123bde18a0b8-kube-api-access-q9b8j\") pod \"redhat-marketplace-pxpqx\" (UID: \"82adbb31-e074-49d1-97fa-123bde18a0b8\") " pod="openshift-marketplace/redhat-marketplace-pxpqx" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.694178 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq56k\" (UniqueName: \"kubernetes.io/projected/05a6df39-9337-4af7-84cb-bdf2ab5fa1e0-kube-api-access-mq56k\") pod \"redhat-operators-26d4p\" (UID: \"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0\") " pod="openshift-marketplace/redhat-operators-26d4p" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.694296 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a6df39-9337-4af7-84cb-bdf2ab5fa1e0-utilities\") pod \"redhat-operators-26d4p\" (UID: \"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0\") " pod="openshift-marketplace/redhat-operators-26d4p" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.694346 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a6df39-9337-4af7-84cb-bdf2ab5fa1e0-catalog-content\") pod \"redhat-operators-26d4p\" (UID: \"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0\") " 
pod="openshift-marketplace/redhat-operators-26d4p" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.705894 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxpqx" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.795809 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a6df39-9337-4af7-84cb-bdf2ab5fa1e0-utilities\") pod \"redhat-operators-26d4p\" (UID: \"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0\") " pod="openshift-marketplace/redhat-operators-26d4p" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.796082 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a6df39-9337-4af7-84cb-bdf2ab5fa1e0-catalog-content\") pod \"redhat-operators-26d4p\" (UID: \"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0\") " pod="openshift-marketplace/redhat-operators-26d4p" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.796119 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq56k\" (UniqueName: \"kubernetes.io/projected/05a6df39-9337-4af7-84cb-bdf2ab5fa1e0-kube-api-access-mq56k\") pod \"redhat-operators-26d4p\" (UID: \"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0\") " pod="openshift-marketplace/redhat-operators-26d4p" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.796726 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05a6df39-9337-4af7-84cb-bdf2ab5fa1e0-catalog-content\") pod \"redhat-operators-26d4p\" (UID: \"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0\") " pod="openshift-marketplace/redhat-operators-26d4p" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.799107 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05a6df39-9337-4af7-84cb-bdf2ab5fa1e0-utilities\") pod \"redhat-operators-26d4p\" (UID: \"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0\") " pod="openshift-marketplace/redhat-operators-26d4p" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.818687 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq56k\" (UniqueName: \"kubernetes.io/projected/05a6df39-9337-4af7-84cb-bdf2ab5fa1e0-kube-api-access-mq56k\") pod \"redhat-operators-26d4p\" (UID: \"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0\") " pod="openshift-marketplace/redhat-operators-26d4p" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.928527 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43206429-01b9-4d6c-8f90-a2f02ca09a1d" path="/var/lib/kubelet/pods/43206429-01b9-4d6c-8f90-a2f02ca09a1d/volumes" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.929414 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821ef1e1-2128-4c28-9030-8faacb7d5fb7" path="/var/lib/kubelet/pods/821ef1e1-2128-4c28-9030-8faacb7d5fb7/volumes" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.929853 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9f33d4-46f4-47e3-a852-9dd264924080" path="/var/lib/kubelet/pods/9b9f33d4-46f4-47e3-a852-9dd264924080/volumes" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.930876 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8705687-8217-4b9f-bed8-a293b8a041b0" 
path="/var/lib/kubelet/pods/b8705687-8217-4b9f-bed8-a293b8a041b0/volumes" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.931400 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c81c9b40-801b-4b14-84c3-e684bbdae002" path="/var/lib/kubelet/pods/c81c9b40-801b-4b14-84c3-e684bbdae002/volumes" Oct 06 15:05:06 crc kubenswrapper[4888]: I1006 15:05:06.944517 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-26d4p" Oct 06 15:05:07 crc kubenswrapper[4888]: I1006 15:05:07.099890 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxpqx"] Oct 06 15:05:07 crc kubenswrapper[4888]: W1006 15:05:07.106414 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82adbb31_e074_49d1_97fa_123bde18a0b8.slice/crio-7ff26fb309a42fdf39f4e2799f38353aa4205dbdeeb959f6d6ad6169f45d86a6 WatchSource:0}: Error finding container 7ff26fb309a42fdf39f4e2799f38353aa4205dbdeeb959f6d6ad6169f45d86a6: Status 404 returned error can't find the container with id 7ff26fb309a42fdf39f4e2799f38353aa4205dbdeeb959f6d6ad6169f45d86a6 Oct 06 15:05:07 crc kubenswrapper[4888]: I1006 15:05:07.333768 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-26d4p"] Oct 06 15:05:07 crc kubenswrapper[4888]: W1006 15:05:07.338228 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05a6df39_9337_4af7_84cb_bdf2ab5fa1e0.slice/crio-72749f4b932fd82e60ef9a21a652f79349efc68e541a5a058fd090d4c335b12d WatchSource:0}: Error finding container 72749f4b932fd82e60ef9a21a652f79349efc68e541a5a058fd090d4c335b12d: Status 404 returned error can't find the container with id 72749f4b932fd82e60ef9a21a652f79349efc68e541a5a058fd090d4c335b12d Oct 06 15:05:07 crc kubenswrapper[4888]: I1006 15:05:07.501507 4888 generic.go:334] "Generic (PLEG): container finished" podID="82adbb31-e074-49d1-97fa-123bde18a0b8" containerID="0b77e35be22f9fcec0d41dd2cf85bf5d7afb128891fafe0206ebd7d59bd55da1" exitCode=0 Oct 06 15:05:07 crc kubenswrapper[4888]: I1006 15:05:07.501735 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxpqx" event={"ID":"82adbb31-e074-49d1-97fa-123bde18a0b8","Type":"ContainerDied","Data":"0b77e35be22f9fcec0d41dd2cf85bf5d7afb128891fafe0206ebd7d59bd55da1"} Oct 06 15:05:07 crc kubenswrapper[4888]: I1006 15:05:07.502644 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxpqx" event={"ID":"82adbb31-e074-49d1-97fa-123bde18a0b8","Type":"ContainerStarted","Data":"7ff26fb309a42fdf39f4e2799f38353aa4205dbdeeb959f6d6ad6169f45d86a6"} Oct 06 15:05:07 crc kubenswrapper[4888]: I1006 15:05:07.505347 4888 generic.go:334] "Generic (PLEG): container finished" podID="05a6df39-9337-4af7-84cb-bdf2ab5fa1e0" containerID="ad3e6e31904d53e9819a192d51cc2609af582e5e9af58b0dfeef8be98e603bd3" exitCode=0 Oct 06 15:05:07 crc kubenswrapper[4888]: I1006 15:05:07.505894 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26d4p" event={"ID":"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0","Type":"ContainerDied","Data":"ad3e6e31904d53e9819a192d51cc2609af582e5e9af58b0dfeef8be98e603bd3"} Oct 06 15:05:07 crc kubenswrapper[4888]: I1006 15:05:07.505945 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-26d4p" event={"ID":"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0","Type":"ContainerStarted","Data":"72749f4b932fd82e60ef9a21a652f79349efc68e541a5a058fd090d4c335b12d"} Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.512862 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26d4p" event={"ID":"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0","Type":"ContainerStarted","Data":"bc68ded935a34deef2a0b34a5311395885e2f83d4194179d4ca0040b0cf1d825"} Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.515892 4888 generic.go:334] "Generic (PLEG): container finished" podID="82adbb31-e074-49d1-97fa-123bde18a0b8" containerID="adb16a1d55a35856129d32ec73d79f82efc4e7e759e185c789a94fb4d0970f7d" exitCode=0 Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.515947 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxpqx" event={"ID":"82adbb31-e074-49d1-97fa-123bde18a0b8","Type":"ContainerDied","Data":"adb16a1d55a35856129d32ec73d79f82efc4e7e759e185c789a94fb4d0970f7d"} Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.793263 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hlr5g"] Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.794399 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.796272 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.797159 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hlr5g"] Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.934710 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02095982-7d9a-470d-a5e4-ddec41a38a36-catalog-content\") pod \"community-operators-hlr5g\" (UID: \"02095982-7d9a-470d-a5e4-ddec41a38a36\") " pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.935035 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74vd\" (UniqueName: \"kubernetes.io/projected/02095982-7d9a-470d-a5e4-ddec41a38a36-kube-api-access-k74vd\") pod \"community-operators-hlr5g\" (UID: \"02095982-7d9a-470d-a5e4-ddec41a38a36\") " pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.935057 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02095982-7d9a-470d-a5e4-ddec41a38a36-utilities\") pod \"community-operators-hlr5g\" (UID: \"02095982-7d9a-470d-a5e4-ddec41a38a36\") " pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.997658 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j4b2n"] Oct 06 15:05:08 crc kubenswrapper[4888]: I1006 15:05:08.998828 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.000555 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.002714 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4b2n"] Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.036569 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k74vd\" (UniqueName: \"kubernetes.io/projected/02095982-7d9a-470d-a5e4-ddec41a38a36-kube-api-access-k74vd\") pod \"community-operators-hlr5g\" (UID: \"02095982-7d9a-470d-a5e4-ddec41a38a36\") " pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.036708 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02095982-7d9a-470d-a5e4-ddec41a38a36-utilities\") pod \"community-operators-hlr5g\" (UID: \"02095982-7d9a-470d-a5e4-ddec41a38a36\") " pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.036869 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02095982-7d9a-470d-a5e4-ddec41a38a36-catalog-content\") pod \"community-operators-hlr5g\" (UID: \"02095982-7d9a-470d-a5e4-ddec41a38a36\") " pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.037170 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02095982-7d9a-470d-a5e4-ddec41a38a36-utilities\") pod \"community-operators-hlr5g\" (UID: \"02095982-7d9a-470d-a5e4-ddec41a38a36\") " pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.037303 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02095982-7d9a-470d-a5e4-ddec41a38a36-catalog-content\") pod \"community-operators-hlr5g\" (UID: \"02095982-7d9a-470d-a5e4-ddec41a38a36\") " pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.059102 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k74vd\" (UniqueName: \"kubernetes.io/projected/02095982-7d9a-470d-a5e4-ddec41a38a36-kube-api-access-k74vd\") pod \"community-operators-hlr5g\" (UID: \"02095982-7d9a-470d-a5e4-ddec41a38a36\") " pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.116915 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.137508 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874bd15d-af29-4293-a17d-27c424806052-utilities\") pod \"certified-operators-j4b2n\" (UID: \"874bd15d-af29-4293-a17d-27c424806052\") " pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.137630 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqhh4\" (UniqueName: \"kubernetes.io/projected/874bd15d-af29-4293-a17d-27c424806052-kube-api-access-fqhh4\") pod \"certified-operators-j4b2n\" (UID: \"874bd15d-af29-4293-a17d-27c424806052\") " pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.137681 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874bd15d-af29-4293-a17d-27c424806052-catalog-content\") pod \"certified-operators-j4b2n\" (UID: \"874bd15d-af29-4293-a17d-27c424806052\") " pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.238413 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874bd15d-af29-4293-a17d-27c424806052-utilities\") pod \"certified-operators-j4b2n\" (UID: \"874bd15d-af29-4293-a17d-27c424806052\") " pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.238684 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqhh4\" (UniqueName: \"kubernetes.io/projected/874bd15d-af29-4293-a17d-27c424806052-kube-api-access-fqhh4\") pod \"certified-operators-j4b2n\" (UID: \"874bd15d-af29-4293-a17d-27c424806052\") " pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.238704 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874bd15d-af29-4293-a17d-27c424806052-catalog-content\") pod \"certified-operators-j4b2n\" (UID: \"874bd15d-af29-4293-a17d-27c424806052\") " pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.239347 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/874bd15d-af29-4293-a17d-27c424806052-catalog-content\") pod \"certified-operators-j4b2n\" (UID: \"874bd15d-af29-4293-a17d-27c424806052\") " pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.239646 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/874bd15d-af29-4293-a17d-27c424806052-utilities\") pod \"certified-operators-j4b2n\" (UID: \"874bd15d-af29-4293-a17d-27c424806052\") " pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.257623 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqhh4\" (UniqueName: \"kubernetes.io/projected/874bd15d-af29-4293-a17d-27c424806052-kube-api-access-fqhh4\") pod 
\"certified-operators-j4b2n\" (UID: \"874bd15d-af29-4293-a17d-27c424806052\") " pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.326142 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.524704 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hlr5g"] Oct 06 15:05:09 crc kubenswrapper[4888]: W1006 15:05:09.540886 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02095982_7d9a_470d_a5e4_ddec41a38a36.slice/crio-2d1ff30038b0e5f4257a4d174b1c471c5bc24094cb3bfb52084baa008697c4c0 WatchSource:0}: Error finding container 2d1ff30038b0e5f4257a4d174b1c471c5bc24094cb3bfb52084baa008697c4c0: Status 404 returned error can't find the container with id 2d1ff30038b0e5f4257a4d174b1c471c5bc24094cb3bfb52084baa008697c4c0 Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.540992 4888 generic.go:334] "Generic (PLEG): container finished" podID="05a6df39-9337-4af7-84cb-bdf2ab5fa1e0" containerID="bc68ded935a34deef2a0b34a5311395885e2f83d4194179d4ca0040b0cf1d825" exitCode=0 Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.541068 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26d4p" event={"ID":"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0","Type":"ContainerDied","Data":"bc68ded935a34deef2a0b34a5311395885e2f83d4194179d4ca0040b0cf1d825"} Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.548228 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4b2n"] Oct 06 15:05:09 crc kubenswrapper[4888]: I1006 15:05:09.553783 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxpqx" event={"ID":"82adbb31-e074-49d1-97fa-123bde18a0b8","Type":"ContainerStarted","Data":"8f98c1b252b602e92ec6dee4a1e9a95c589df4b25f057e625d8ee9b9e79d944f"} Oct 06 15:05:10 crc kubenswrapper[4888]: I1006 15:05:10.558749 4888 generic.go:334] "Generic (PLEG): container finished" podID="02095982-7d9a-470d-a5e4-ddec41a38a36" containerID="3d68b25ae1d55f1f04a041e5489e8a4aabc841258688c4ad808a90e8b83bbdce" exitCode=0 Oct 06 15:05:10 crc kubenswrapper[4888]: I1006 15:05:10.559118 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hlr5g" event={"ID":"02095982-7d9a-470d-a5e4-ddec41a38a36","Type":"ContainerDied","Data":"3d68b25ae1d55f1f04a041e5489e8a4aabc841258688c4ad808a90e8b83bbdce"} Oct 06 15:05:10 crc kubenswrapper[4888]: I1006 15:05:10.559152 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hlr5g" event={"ID":"02095982-7d9a-470d-a5e4-ddec41a38a36","Type":"ContainerStarted","Data":"2d1ff30038b0e5f4257a4d174b1c471c5bc24094cb3bfb52084baa008697c4c0"} Oct 06 15:05:10 crc kubenswrapper[4888]: I1006 15:05:10.561495 4888 generic.go:334] "Generic (PLEG): container finished" podID="874bd15d-af29-4293-a17d-27c424806052" containerID="161ac5c81354cef8e49141799522043832b6d28e93489e1a69496ada4d59c82d" exitCode=0 Oct 06 15:05:10 crc kubenswrapper[4888]: I1006 15:05:10.561555 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4b2n" 
event={"ID":"874bd15d-af29-4293-a17d-27c424806052","Type":"ContainerDied","Data":"161ac5c81354cef8e49141799522043832b6d28e93489e1a69496ada4d59c82d"} Oct 06 15:05:10 crc kubenswrapper[4888]: I1006 15:05:10.561580 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4b2n" event={"ID":"874bd15d-af29-4293-a17d-27c424806052","Type":"ContainerStarted","Data":"ca40c575917f93eb4e269d276f02682d1ff031bbf6f62acfe0c11d864b67d770"} Oct 06 15:05:10 crc kubenswrapper[4888]: I1006 15:05:10.566325 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26d4p" event={"ID":"05a6df39-9337-4af7-84cb-bdf2ab5fa1e0","Type":"ContainerStarted","Data":"913722bf7245448858fae02bd1111b23921e042fdba6ef6fc599f52bddd63971"} Oct 06 15:05:10 crc kubenswrapper[4888]: I1006 15:05:10.580543 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pxpqx" podStartSLOduration=3.172150698 podStartE2EDuration="4.580527507s" podCreationTimestamp="2025-10-06 15:05:06 +0000 UTC" firstStartedPulling="2025-10-06 15:05:07.503185857 +0000 UTC m=+247.315536575" lastFinishedPulling="2025-10-06 15:05:08.911562666 +0000 UTC m=+248.723913384" observedRunningTime="2025-10-06 15:05:09.589045464 +0000 UTC m=+249.401396192" watchObservedRunningTime="2025-10-06 15:05:10.580527507 +0000 UTC m=+250.392878225" Oct 06 15:05:10 crc kubenswrapper[4888]: I1006 15:05:10.604718 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-26d4p" podStartSLOduration=2.067288326 podStartE2EDuration="4.604703085s" podCreationTimestamp="2025-10-06 15:05:06 +0000 UTC" firstStartedPulling="2025-10-06 15:05:07.508199702 +0000 UTC m=+247.320550410" lastFinishedPulling="2025-10-06 15:05:10.045614451 +0000 UTC m=+249.857965169" observedRunningTime="2025-10-06 15:05:10.602940644 +0000 UTC m=+250.415291362" watchObservedRunningTime="2025-10-06 15:05:10.604703085 +0000 UTC m=+250.417053803" Oct 06 15:05:11 crc kubenswrapper[4888]: I1006 15:05:11.575720 4888 generic.go:334] "Generic (PLEG): container finished" podID="02095982-7d9a-470d-a5e4-ddec41a38a36" containerID="f310afe146fe3511b221270052c81a918225689e957e9140bd77165edf4894c6" exitCode=0 Oct 06 15:05:11 crc kubenswrapper[4888]: I1006 15:05:11.577243 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hlr5g" event={"ID":"02095982-7d9a-470d-a5e4-ddec41a38a36","Type":"ContainerDied","Data":"f310afe146fe3511b221270052c81a918225689e957e9140bd77165edf4894c6"} Oct 06 15:05:11 crc kubenswrapper[4888]: I1006 15:05:11.581000 4888 generic.go:334] "Generic (PLEG): container finished" podID="874bd15d-af29-4293-a17d-27c424806052" containerID="217959c9f74aff51ab46fc6108d29ea771d786397c0b313a68c49797d0825a6f" exitCode=0 Oct 06 15:05:11 crc kubenswrapper[4888]: I1006 15:05:11.581813 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4b2n" event={"ID":"874bd15d-af29-4293-a17d-27c424806052","Type":"ContainerDied","Data":"217959c9f74aff51ab46fc6108d29ea771d786397c0b313a68c49797d0825a6f"} Oct 06 15:05:13 crc kubenswrapper[4888]: I1006 15:05:13.593814 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hlr5g" event={"ID":"02095982-7d9a-470d-a5e4-ddec41a38a36","Type":"ContainerStarted","Data":"4e5bfe08f4c5806040ffb14f9ed88d1b3e77d684aecd3e593c9752bb8ae245d8"} Oct 06 15:05:13 crc 
Oct 06 15:05:13 crc kubenswrapper[4888]: I1006 15:05:13.598545 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4b2n" event={"ID":"874bd15d-af29-4293-a17d-27c424806052","Type":"ContainerStarted","Data":"7b62f7854c9832c2e1717c688afb40f6e8f668d844099db58cb73b749809c56c"}
Oct 06 15:05:13 crc kubenswrapper[4888]: I1006 15:05:13.614953 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hlr5g" podStartSLOduration=4.067981262 podStartE2EDuration="5.614789807s" podCreationTimestamp="2025-10-06 15:05:08 +0000 UTC" firstStartedPulling="2025-10-06 15:05:10.561380906 +0000 UTC m=+250.373731624" lastFinishedPulling="2025-10-06 15:05:12.108189451 +0000 UTC m=+251.920540169" observedRunningTime="2025-10-06 15:05:13.614183266 +0000 UTC m=+253.426533984" watchObservedRunningTime="2025-10-06 15:05:13.614789807 +0000 UTC m=+253.427140525"
Oct 06 15:05:13 crc kubenswrapper[4888]: I1006 15:05:13.645397 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j4b2n" podStartSLOduration=4.153646526 podStartE2EDuration="5.64537882s" podCreationTimestamp="2025-10-06 15:05:08 +0000 UTC" firstStartedPulling="2025-10-06 15:05:10.562735183 +0000 UTC m=+250.375085901" lastFinishedPulling="2025-10-06 15:05:12.054467477 +0000 UTC m=+251.866818195" observedRunningTime="2025-10-06 15:05:13.643791405 +0000 UTC m=+253.456142123" watchObservedRunningTime="2025-10-06 15:05:13.64537882 +0000 UTC m=+253.457729538"
Oct 06 15:05:16 crc kubenswrapper[4888]: I1006 15:05:16.706824 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pxpqx"
Oct 06 15:05:16 crc kubenswrapper[4888]: I1006 15:05:16.706899 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pxpqx"
Oct 06 15:05:16 crc kubenswrapper[4888]: I1006 15:05:16.756974 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pxpqx"
Oct 06 15:05:16 crc kubenswrapper[4888]: I1006 15:05:16.945189 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-26d4p"
Oct 06 15:05:16 crc kubenswrapper[4888]: I1006 15:05:16.945248 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-26d4p"
Oct 06 15:05:16 crc kubenswrapper[4888]: I1006 15:05:16.987086 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-26d4p"
Oct 06 15:05:17 crc kubenswrapper[4888]: I1006 15:05:17.654666 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-26d4p"
Oct 06 15:05:17 crc kubenswrapper[4888]: I1006 15:05:17.656550 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pxpqx"
Oct 06 15:05:19 crc kubenswrapper[4888]: I1006 15:05:19.117841 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hlr5g"
Oct 06 15:05:19 crc kubenswrapper[4888]: I1006 15:05:19.118167 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hlr5g"
Oct 06 15:05:19 crc kubenswrapper[4888]: I1006 15:05:19.164495 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hlr5g"
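The probe transitions above follow startup-probe gating: each catalog pod first reports probe="startup" status="unhealthy", its readiness status stays empty until the startup probe flips to "started", and only then does readiness report "ready". A toy model of that gating (not kubelet's prober code):

```go
package main

import "fmt"

// worker models one container's probe state the way the
// "SyncLoop (probe)" lines report it: readiness is suppressed
// (empty status) until the startup probe has succeeded once.
type worker struct {
	started bool // set once the startup probe succeeds
}

func (w *worker) probe(kind string, success bool) string {
	switch kind {
	case "startup":
		if success {
			w.started = true
			return "started"
		}
		return "unhealthy"
	case "readiness":
		if !w.started {
			return "" // gated: startup has not passed yet
		}
		if success {
			return "ready"
		}
		return "not ready"
	}
	return "unknown"
}

func main() {
	var w worker
	fmt.Println(w.probe("startup", false))  // unhealthy
	fmt.Println(w.probe("readiness", true)) // "" (still gated)
	fmt.Println(w.probe("startup", true))   // started
	fmt.Println(w.probe("readiness", true)) // ready
}
```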
probe="startup" status="started" pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:05:19 crc kubenswrapper[4888]: I1006 15:05:19.327368 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:19 crc kubenswrapper[4888]: I1006 15:05:19.327423 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:19 crc kubenswrapper[4888]: I1006 15:05:19.364136 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:19 crc kubenswrapper[4888]: I1006 15:05:19.665045 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j4b2n" Oct 06 15:05:19 crc kubenswrapper[4888]: I1006 15:05:19.668421 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hlr5g" Oct 06 15:06:32 crc kubenswrapper[4888]: I1006 15:06:32.564112 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:06:32 crc kubenswrapper[4888]: I1006 15:06:32.564582 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:07:02 crc kubenswrapper[4888]: I1006 15:07:02.564265 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:07:02 crc kubenswrapper[4888]: I1006 15:07:02.564899 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.595117 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rjlhz"] Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.596083 4888 util.go:30] "No sandbox for pod can be found. 
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.651622 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rjlhz"]
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.712969 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.713285 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0edfa034-577b-4397-a9c7-e1747c63cb72-registry-tls\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.713327 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0edfa034-577b-4397-a9c7-e1747c63cb72-bound-sa-token\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.713366 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0edfa034-577b-4397-a9c7-e1747c63cb72-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.713397 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbgnh\" (UniqueName: \"kubernetes.io/projected/0edfa034-577b-4397-a9c7-e1747c63cb72-kube-api-access-xbgnh\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.713424 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0edfa034-577b-4397-a9c7-e1747c63cb72-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.713456 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0edfa034-577b-4397-a9c7-e1747c63cb72-trusted-ca\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.713475 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0edfa034-577b-4397-a9c7-e1747c63cb72-registry-certificates\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.737654 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.814400 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0edfa034-577b-4397-a9c7-e1747c63cb72-registry-tls\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.814449 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0edfa034-577b-4397-a9c7-e1747c63cb72-bound-sa-token\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.814478 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0edfa034-577b-4397-a9c7-e1747c63cb72-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.814504 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbgnh\" (UniqueName: \"kubernetes.io/projected/0edfa034-577b-4397-a9c7-e1747c63cb72-kube-api-access-xbgnh\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.814531 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0edfa034-577b-4397-a9c7-e1747c63cb72-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.814555 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0edfa034-577b-4397-a9c7-e1747c63cb72-trusted-ca\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.814569 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0edfa034-577b-4397-a9c7-e1747c63cb72-registry-certificates\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.815356 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0edfa034-577b-4397-a9c7-e1747c63cb72-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.815945 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0edfa034-577b-4397-a9c7-e1747c63cb72-registry-certificates\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.816016 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0edfa034-577b-4397-a9c7-e1747c63cb72-trusted-ca\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.820650 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0edfa034-577b-4397-a9c7-e1747c63cb72-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.820682 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0edfa034-577b-4397-a9c7-e1747c63cb72-registry-tls\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.835087 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0edfa034-577b-4397-a9c7-e1747c63cb72-bound-sa-token\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.835980 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbgnh\" (UniqueName: \"kubernetes.io/projected/0edfa034-577b-4397-a9c7-e1747c63cb72-kube-api-access-xbgnh\") pod \"image-registry-66df7c8f76-rjlhz\" (UID: \"0edfa034-577b-4397-a9c7-e1747c63cb72\") " pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:06 crc kubenswrapper[4888]: I1006 15:07:06.912990 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:07 crc kubenswrapper[4888]: I1006 15:07:07.110285 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rjlhz"]
Oct 06 15:07:07 crc kubenswrapper[4888]: I1006 15:07:07.164321 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz" event={"ID":"0edfa034-577b-4397-a9c7-e1747c63cb72","Type":"ContainerStarted","Data":"f1848b83c7759a2743b41c2cd32a407f0ec0fcc0304ae12323f409494c2ab35e"}
Oct 06 15:07:08 crc kubenswrapper[4888]: I1006 15:07:08.170602 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz" event={"ID":"0edfa034-577b-4397-a9c7-e1747c63cb72","Type":"ContainerStarted","Data":"eba28f57110d6e911c228b92e4dc9d1fc713dcfa77b91d413e46266153cafd3d"}
Oct 06 15:07:08 crc kubenswrapper[4888]: I1006 15:07:08.170929 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:08 crc kubenswrapper[4888]: I1006 15:07:08.193495 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz" podStartSLOduration=2.193480152 podStartE2EDuration="2.193480152s" podCreationTimestamp="2025-10-06 15:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:07:08.192958744 +0000 UTC m=+368.005309472" watchObservedRunningTime="2025-10-06 15:07:08.193480152 +0000 UTC m=+368.005830870"
Oct 06 15:07:26 crc kubenswrapper[4888]: I1006 15:07:26.917928 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-rjlhz"
Oct 06 15:07:26 crc kubenswrapper[4888]: I1006 15:07:26.970987 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j2xlv"]
Oct 06 15:07:32 crc kubenswrapper[4888]: I1006 15:07:32.564311 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 15:07:32 crc kubenswrapper[4888]: I1006 15:07:32.564677 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 15:07:32 crc kubenswrapper[4888]: I1006 15:07:32.564737 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk"
Oct 06 15:07:32 crc kubenswrapper[4888]: I1006 15:07:32.565770 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fc88084e8dfb0c728018c0d641727f0db19b908b971796df23c7ddf2d6bca30"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 06 15:07:32 crc kubenswrapper[4888]: I1006 15:07:32.565960 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://0fc88084e8dfb0c728018c0d641727f0db19b908b971796df23c7ddf2d6bca30" gracePeriod=600
Oct 06 15:07:33 crc kubenswrapper[4888]: I1006 15:07:33.304119 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="0fc88084e8dfb0c728018c0d641727f0db19b908b971796df23c7ddf2d6bca30" exitCode=0
Oct 06 15:07:33 crc kubenswrapper[4888]: I1006 15:07:33.304214 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"0fc88084e8dfb0c728018c0d641727f0db19b908b971796df23c7ddf2d6bca30"}
Oct 06 15:07:33 crc kubenswrapper[4888]: I1006 15:07:33.304373 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"e98908deb283e6a036eb37ab0790f5913cfd911db2848acf3a6ebbd35a13b160"}
Oct 06 15:07:33 crc kubenswrapper[4888]: I1006 15:07:33.304390 4888 scope.go:117] "RemoveContainer" containerID="3bf224a565364b42ca08f4c058a7633064f4add4bd4b7d757035bbceffa7452a"
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.010281 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" podUID="8fff9bc3-9673-46a4-8f88-56f9c24e16f1" containerName="registry" containerID="cri-o://29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4" gracePeriod=30
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.036822 4888 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-j2xlv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.12:5000/healthz\": dial tcp 10.217.0.12:5000: connect: connection refused" start-of-body=
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.036915 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" podUID="8fff9bc3-9673-46a4-8f88-56f9c24e16f1" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.12:5000/healthz\": dial tcp 10.217.0.12:5000: connect: connection refused"
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.313102 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
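"Killing container with a grace period" (gracePeriod=600 for the machine-config-daemon above, gracePeriod=30 for the old registry pod) is the usual TERM-then-KILL sequence: signal the process, wait up to the grace period for it to exit, and force-kill if it has not. A stand-in sketch that manages a plain child process rather than making a CRI call:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace sends SIGTERM, waits up to the grace period for the
// process to exit on its own, and falls back to SIGKILL, mirroring the
// container-kill sequence behind the log lines above.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period expired: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "600")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(killWithGrace(cmd, 2*time.Second))
}
```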
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.400435 4888 generic.go:334] "Generic (PLEG): container finished" podID="8fff9bc3-9673-46a4-8f88-56f9c24e16f1" containerID="29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4" exitCode=0
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.400468 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" event={"ID":"8fff9bc3-9673-46a4-8f88-56f9c24e16f1","Type":"ContainerDied","Data":"29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4"}
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.400490 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv" event={"ID":"8fff9bc3-9673-46a4-8f88-56f9c24e16f1","Type":"ContainerDied","Data":"5192c4fe6946138317283e91dd730c0b3183b38bb1e052f25b075e47e5faaa41"}
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.400524 4888 scope.go:117] "RemoveContainer" containerID="29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4"
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.400639 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j2xlv"
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.419029 4888 scope.go:117] "RemoveContainer" containerID="29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4"
Oct 06 15:07:52 crc kubenswrapper[4888]: E1006 15:07:52.421082 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4\": container with ID starting with 29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4 not found: ID does not exist" containerID="29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4"
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.421125 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4"} err="failed to get container status \"29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4\": rpc error: code = NotFound desc = could not find container \"29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4\": container with ID starting with 29abbc26a960a48dd1bff7d9ce41fed0e93da449bfcdbba61593d363e3c45cf4 not found: ID does not exist"
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.444932 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-tls\") pod \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") "
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.444975 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-ca-trust-extracted\") pod \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") "
Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.444995 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b4n7\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-kube-api-access-9b4n7\") pod \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") "
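The tolerated error pair above ("ContainerStatus from runtime service failed" with rpc code = NotFound, followed by "DeleteContainer returned error") is the signature of idempotent cleanup: the container had already been removed, so there was nothing left to do, and the error is logged but not acted on. A sketch of that pattern with gRPC status codes (requires the google.golang.org/grpc module; removeContainer is a hypothetical stand-in for the CRI call):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer pretends the runtime already deleted the container,
// the situation the log shows after the kubelet's own RemoveContainer.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// cleanup treats NotFound as success: the goal state ("container gone")
// already holds, so the error is swallowed rather than retried.
func cleanup(id string) error {
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %s already removed, ignoring\n", id)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	fmt.Println(cleanup("29abbc26a960"))
}
```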
\"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-kube-api-access-9b4n7\") pod \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.445014 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-trusted-ca\") pod \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.445052 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-installation-pull-secrets\") pod \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.445090 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-certificates\") pod \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.445244 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.445311 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-bound-sa-token\") pod \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\" (UID: \"8fff9bc3-9673-46a4-8f88-56f9c24e16f1\") " Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.446538 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8fff9bc3-9673-46a4-8f88-56f9c24e16f1" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.450045 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8fff9bc3-9673-46a4-8f88-56f9c24e16f1" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.451420 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8fff9bc3-9673-46a4-8f88-56f9c24e16f1" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.451487 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-kube-api-access-9b4n7" (OuterVolumeSpecName: "kube-api-access-9b4n7") pod "8fff9bc3-9673-46a4-8f88-56f9c24e16f1" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1"). InnerVolumeSpecName "kube-api-access-9b4n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.453785 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8fff9bc3-9673-46a4-8f88-56f9c24e16f1" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.456408 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8fff9bc3-9673-46a4-8f88-56f9c24e16f1" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.460127 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8fff9bc3-9673-46a4-8f88-56f9c24e16f1" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.463328 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8fff9bc3-9673-46a4-8f88-56f9c24e16f1" (UID: "8fff9bc3-9673-46a4-8f88-56f9c24e16f1"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.547269 4888 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.547314 4888 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.547326 4888 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.547337 4888 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.547351 4888 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.547362 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b4n7\" (UniqueName: \"kubernetes.io/projected/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-kube-api-access-9b4n7\") on node \"crc\" DevicePath \"\"" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.547373 4888 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8fff9bc3-9673-46a4-8f88-56f9c24e16f1-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.728385 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j2xlv"] Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.732973 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j2xlv"] Oct 06 15:07:52 crc kubenswrapper[4888]: I1006 15:07:52.928642 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fff9bc3-9673-46a4-8f88-56f9c24e16f1" path="/var/lib/kubelet/pods/8fff9bc3-9673-46a4-8f88-56f9c24e16f1/volumes" Oct 06 15:09:32 crc kubenswrapper[4888]: I1006 15:09:32.564095 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:09:32 crc kubenswrapper[4888]: I1006 15:09:32.564614 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:10:02 crc kubenswrapper[4888]: I1006 15:10:02.564222 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:10:02 crc kubenswrapper[4888]: I1006 15:10:02.565071 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.324766 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zz9pz"] Oct 06 15:10:19 crc kubenswrapper[4888]: E1006 15:10:19.325562 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fff9bc3-9673-46a4-8f88-56f9c24e16f1" containerName="registry" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.325592 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fff9bc3-9673-46a4-8f88-56f9c24e16f1" containerName="registry" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.325729 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fff9bc3-9673-46a4-8f88-56f9c24e16f1" containerName="registry" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.326396 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-zz9pz" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.334137 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.334640 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.334850 4888 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hvcsl" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.336868 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7fvrb"] Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.337852 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7fvrb" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.339476 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zz9pz"] Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.348324 4888 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xl2nr" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.352466 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7fvrb"] Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.374354 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-pd9dj"] Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.375178 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-pd9dj" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.377781 4888 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-gzmpq" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.389475 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-pd9dj"] Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.400700 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hhzn\" (UniqueName: \"kubernetes.io/projected/bd69d2c4-f9c6-4688-bd53-1ae4cbf36175-kube-api-access-5hhzn\") pod \"cert-manager-cainjector-7f985d654d-zz9pz\" (UID: \"bd69d2c4-f9c6-4688-bd53-1ae4cbf36175\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-zz9pz" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.400785 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bh7s\" (UniqueName: \"kubernetes.io/projected/23c3654d-b36a-455a-a78c-3279d3ba9d23-kube-api-access-2bh7s\") pod \"cert-manager-5b446d88c5-7fvrb\" (UID: \"23c3654d-b36a-455a-a78c-3279d3ba9d23\") " pod="cert-manager/cert-manager-5b446d88c5-7fvrb" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.501936 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bh7s\" (UniqueName: \"kubernetes.io/projected/23c3654d-b36a-455a-a78c-3279d3ba9d23-kube-api-access-2bh7s\") pod \"cert-manager-5b446d88c5-7fvrb\" (UID: \"23c3654d-b36a-455a-a78c-3279d3ba9d23\") " pod="cert-manager/cert-manager-5b446d88c5-7fvrb" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.502012 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shcxx\" (UniqueName: \"kubernetes.io/projected/a76a2cfe-bd91-4f96-86b8-9449ebec89d9-kube-api-access-shcxx\") pod \"cert-manager-webhook-5655c58dd6-pd9dj\" (UID: \"a76a2cfe-bd91-4f96-86b8-9449ebec89d9\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-pd9dj" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.502095 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hhzn\" (UniqueName: \"kubernetes.io/projected/bd69d2c4-f9c6-4688-bd53-1ae4cbf36175-kube-api-access-5hhzn\") pod \"cert-manager-cainjector-7f985d654d-zz9pz\" (UID: \"bd69d2c4-f9c6-4688-bd53-1ae4cbf36175\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-zz9pz" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.520245 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hhzn\" (UniqueName: \"kubernetes.io/projected/bd69d2c4-f9c6-4688-bd53-1ae4cbf36175-kube-api-access-5hhzn\") pod \"cert-manager-cainjector-7f985d654d-zz9pz\" (UID: \"bd69d2c4-f9c6-4688-bd53-1ae4cbf36175\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-zz9pz" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.520303 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bh7s\" (UniqueName: \"kubernetes.io/projected/23c3654d-b36a-455a-a78c-3279d3ba9d23-kube-api-access-2bh7s\") pod \"cert-manager-5b446d88c5-7fvrb\" (UID: \"23c3654d-b36a-455a-a78c-3279d3ba9d23\") " pod="cert-manager/cert-manager-5b446d88c5-7fvrb" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.603269 4888 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-shcxx\" (UniqueName: \"kubernetes.io/projected/a76a2cfe-bd91-4f96-86b8-9449ebec89d9-kube-api-access-shcxx\") pod \"cert-manager-webhook-5655c58dd6-pd9dj\" (UID: \"a76a2cfe-bd91-4f96-86b8-9449ebec89d9\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-pd9dj" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.617928 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shcxx\" (UniqueName: \"kubernetes.io/projected/a76a2cfe-bd91-4f96-86b8-9449ebec89d9-kube-api-access-shcxx\") pod \"cert-manager-webhook-5655c58dd6-pd9dj\" (UID: \"a76a2cfe-bd91-4f96-86b8-9449ebec89d9\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-pd9dj" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.651687 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-zz9pz" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.663355 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7fvrb" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.689015 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-pd9dj" Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.891130 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zz9pz"] Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.909893 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 15:10:19 crc kubenswrapper[4888]: I1006 15:10:19.946646 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7fvrb"] Oct 06 15:10:19 crc kubenswrapper[4888]: W1006 15:10:19.954878 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23c3654d_b36a_455a_a78c_3279d3ba9d23.slice/crio-29af31862b3a89a16ca28789cbb2d0871f3892120647178ab6868180f4dff2f4 WatchSource:0}: Error finding container 29af31862b3a89a16ca28789cbb2d0871f3892120647178ab6868180f4dff2f4: Status 404 returned error can't find the container with id 29af31862b3a89a16ca28789cbb2d0871f3892120647178ab6868180f4dff2f4 Oct 06 15:10:20 crc kubenswrapper[4888]: I1006 15:10:20.004745 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-pd9dj"] Oct 06 15:10:20 crc kubenswrapper[4888]: W1006 15:10:20.008135 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda76a2cfe_bd91_4f96_86b8_9449ebec89d9.slice/crio-1917ac835b940d9078868ca14d8ed49866780d7e4ce2f12e94706426619c6ed7 WatchSource:0}: Error finding container 1917ac835b940d9078868ca14d8ed49866780d7e4ce2f12e94706426619c6ed7: Status 404 returned error can't find the container with id 1917ac835b940d9078868ca14d8ed49866780d7e4ce2f12e94706426619c6ed7 Oct 06 15:10:20 crc kubenswrapper[4888]: I1006 15:10:20.110493 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7fvrb" event={"ID":"23c3654d-b36a-455a-a78c-3279d3ba9d23","Type":"ContainerStarted","Data":"29af31862b3a89a16ca28789cbb2d0871f3892120647178ab6868180f4dff2f4"} Oct 06 15:10:20 crc kubenswrapper[4888]: I1006 15:10:20.111882 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-pd9dj" 
event={"ID":"a76a2cfe-bd91-4f96-86b8-9449ebec89d9","Type":"ContainerStarted","Data":"1917ac835b940d9078868ca14d8ed49866780d7e4ce2f12e94706426619c6ed7"} Oct 06 15:10:20 crc kubenswrapper[4888]: I1006 15:10:20.112726 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-zz9pz" event={"ID":"bd69d2c4-f9c6-4688-bd53-1ae4cbf36175","Type":"ContainerStarted","Data":"139c561d52ef4748446f7b4aa0fe17867f81e4ad70614a888e6d3247c3cde85e"} Oct 06 15:10:24 crc kubenswrapper[4888]: I1006 15:10:24.135402 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-zz9pz" event={"ID":"bd69d2c4-f9c6-4688-bd53-1ae4cbf36175","Type":"ContainerStarted","Data":"f4cc076c5041b4f4d82f0ad6b096918f9229525bdd9753c74fccac6f24fa92e8"} Oct 06 15:10:24 crc kubenswrapper[4888]: I1006 15:10:24.137842 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-pd9dj" event={"ID":"a76a2cfe-bd91-4f96-86b8-9449ebec89d9","Type":"ContainerStarted","Data":"05d17fe863f344f41e0a636cd47c596126ade7c52be30203516a8db21a2ef193"} Oct 06 15:10:24 crc kubenswrapper[4888]: I1006 15:10:24.138064 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-pd9dj" Oct 06 15:10:24 crc kubenswrapper[4888]: I1006 15:10:24.139898 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7fvrb" event={"ID":"23c3654d-b36a-455a-a78c-3279d3ba9d23","Type":"ContainerStarted","Data":"180fe596a6cfc97d8d0aeb354545af7ff9d3b6e414a386a5289ee3bce5cb8352"} Oct 06 15:10:24 crc kubenswrapper[4888]: I1006 15:10:24.154884 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-zz9pz" podStartSLOduration=1.796604593 podStartE2EDuration="5.154865506s" podCreationTimestamp="2025-10-06 15:10:19 +0000 UTC" firstStartedPulling="2025-10-06 15:10:19.907486003 +0000 UTC m=+559.719836721" lastFinishedPulling="2025-10-06 15:10:23.265746916 +0000 UTC m=+563.078097634" observedRunningTime="2025-10-06 15:10:24.152588014 +0000 UTC m=+563.964938752" watchObservedRunningTime="2025-10-06 15:10:24.154865506 +0000 UTC m=+563.967216224" Oct 06 15:10:24 crc kubenswrapper[4888]: I1006 15:10:24.171403 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-7fvrb" podStartSLOduration=1.952176697 podStartE2EDuration="5.171384596s" podCreationTimestamp="2025-10-06 15:10:19 +0000 UTC" firstStartedPulling="2025-10-06 15:10:19.95731644 +0000 UTC m=+559.769667158" lastFinishedPulling="2025-10-06 15:10:23.176524349 +0000 UTC m=+562.988875057" observedRunningTime="2025-10-06 15:10:24.169939541 +0000 UTC m=+563.982290279" watchObservedRunningTime="2025-10-06 15:10:24.171384596 +0000 UTC m=+563.983735314" Oct 06 15:10:24 crc kubenswrapper[4888]: I1006 15:10:24.197188 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-pd9dj" podStartSLOduration=2.029771517 podStartE2EDuration="5.197154026s" podCreationTimestamp="2025-10-06 15:10:19 +0000 UTC" firstStartedPulling="2025-10-06 15:10:20.010481762 +0000 UTC m=+559.822832480" lastFinishedPulling="2025-10-06 15:10:23.177864281 +0000 UTC m=+562.990214989" observedRunningTime="2025-10-06 15:10:24.193506922 +0000 UTC m=+564.005857640" watchObservedRunningTime="2025-10-06 15:10:24.197154026 +0000 UTC m=+564.009504784" Oct 06 
15:10:29 crc kubenswrapper[4888]: I1006 15:10:29.692336 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-pd9dj" Oct 06 15:10:29 crc kubenswrapper[4888]: I1006 15:10:29.724155 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hzx2q"] Oct 06 15:10:29 crc kubenswrapper[4888]: I1006 15:10:29.724988 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovn-controller" containerID="cri-o://b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9" gracePeriod=30 Oct 06 15:10:29 crc kubenswrapper[4888]: I1006 15:10:29.731141 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="sbdb" containerID="cri-o://16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad" gracePeriod=30 Oct 06 15:10:29 crc kubenswrapper[4888]: I1006 15:10:29.731204 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="nbdb" containerID="cri-o://642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3" gracePeriod=30 Oct 06 15:10:29 crc kubenswrapper[4888]: I1006 15:10:29.731238 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="northd" containerID="cri-o://91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166" gracePeriod=30 Oct 06 15:10:29 crc kubenswrapper[4888]: I1006 15:10:29.731269 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79" gracePeriod=30 Oct 06 15:10:29 crc kubenswrapper[4888]: I1006 15:10:29.731300 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="kube-rbac-proxy-node" containerID="cri-o://eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb" gracePeriod=30 Oct 06 15:10:29 crc kubenswrapper[4888]: I1006 15:10:29.731330 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovn-acl-logging" containerID="cri-o://c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d" gracePeriod=30 Oct 06 15:10:29 crc kubenswrapper[4888]: I1006 15:10:29.787424 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" containerID="cri-o://f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe" gracePeriod=30 Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.085205 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/3.log" Oct 06 15:10:30 crc 
kubenswrapper[4888]: I1006 15:10:30.087671 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovn-acl-logging/0.log" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.088178 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovn-controller/0.log" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.088668 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.139690 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-script-lib\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.140130 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-bin\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.140270 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.140329 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.140719 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovn-node-metrics-cert\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.140850 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-slash\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.140880 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-netns\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.140906 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-systemd-units\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.140932 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phx28\" (UniqueName: \"kubernetes.io/projected/61cf5a40-f739-4ffe-8544-34bcd92aadc1-kube-api-access-phx28\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.140951 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-ovn-kubernetes\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.140969 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141003 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-systemd\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141025 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-log-socket\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141088 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-kubelet\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141104 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-node-log\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141137 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-var-lib-openvswitch\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141160 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-env-overrides\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141177 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-ovn\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141191 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-openvswitch\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141209 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-netd\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141227 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-etc-openvswitch\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141260 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-config\") pod \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\" (UID: \"61cf5a40-f739-4ffe-8544-34bcd92aadc1\") " Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141539 4888 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.141556 4888 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc 
kubenswrapper[4888]: I1006 15:10:30.141974 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142014 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-slash" (OuterVolumeSpecName: "host-slash") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142033 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142049 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142317 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-node-log" (OuterVolumeSpecName: "node-log") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142345 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142365 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142383 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142403 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-log-socket" (OuterVolumeSpecName: "log-socket") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142401 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142441 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142777 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142822 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142841 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.142860 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.146010 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61cf5a40-f739-4ffe-8544-34bcd92aadc1-kube-api-access-phx28" (OuterVolumeSpecName: "kube-api-access-phx28") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "kube-api-access-phx28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.146178 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.159520 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6r2rf"] Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160231 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="nbdb" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160257 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="nbdb" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160269 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160277 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160287 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovn-acl-logging" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160295 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovn-acl-logging" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160302 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="kube-rbac-proxy-node" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160309 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="kube-rbac-proxy-node" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160322 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="kubecfg-setup" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160328 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="kubecfg-setup" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160340 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="sbdb" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160347 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="sbdb" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160356 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160364 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160372 4888 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovn-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160379 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovn-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160387 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="northd" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160393 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="northd" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160405 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160413 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160422 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160428 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160441 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160446 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160546 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160554 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="northd" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160563 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovn-acl-logging" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160573 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="nbdb" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160581 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160590 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160600 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160608 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovn-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 
15:10:30.160617 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="sbdb" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160629 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="kube-rbac-proxy-node" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.160636 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.160994 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.161005 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.161174 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerName="ovnkube-controller" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.163031 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "61cf5a40-f739-4ffe-8544-34bcd92aadc1" (UID: "61cf5a40-f739-4ffe-8544-34bcd92aadc1"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.163167 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.173764 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hw8s9_a8a92e6a-76c9-4370-b509-56d6e41f99de/kube-multus/1.log" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.174956 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hw8s9_a8a92e6a-76c9-4370-b509-56d6e41f99de/kube-multus/0.log" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.174997 4888 generic.go:334] "Generic (PLEG): container finished" podID="a8a92e6a-76c9-4370-b509-56d6e41f99de" containerID="4275f071ce10fcca2346d3403453ef0d290da1985e1671ef7066d9abc889c4c2" exitCode=2 Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.175064 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hw8s9" event={"ID":"a8a92e6a-76c9-4370-b509-56d6e41f99de","Type":"ContainerDied","Data":"4275f071ce10fcca2346d3403453ef0d290da1985e1671ef7066d9abc889c4c2"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.175098 4888 scope.go:117] "RemoveContainer" containerID="fd2c915ff497edd6cc0de69cd7d8582df17f0eff77c930b504218338c26847a8" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.175654 4888 scope.go:117] "RemoveContainer" containerID="4275f071ce10fcca2346d3403453ef0d290da1985e1671ef7066d9abc889c4c2" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.175917 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hw8s9_openshift-multus(a8a92e6a-76c9-4370-b509-56d6e41f99de)\"" pod="openshift-multus/multus-hw8s9" podUID="a8a92e6a-76c9-4370-b509-56d6e41f99de" Oct 
06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.178153 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovnkube-controller/3.log" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.180983 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovn-acl-logging/0.log" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.181567 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hzx2q_61cf5a40-f739-4ffe-8544-34bcd92aadc1/ovn-controller/0.log" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.181962 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe" exitCode=0 Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.181983 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad" exitCode=0 Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.181992 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3" exitCode=0 Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182004 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166" exitCode=0 Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182014 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79" exitCode=0 Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182023 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb" exitCode=0 Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182033 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d" exitCode=143 Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182041 4888 generic.go:334] "Generic (PLEG): container finished" podID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" containerID="b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9" exitCode=143 Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182061 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182087 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182099 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" 
event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182110 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182120 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182118 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182130 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182238 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182250 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182255 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182262 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182267 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182273 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182278 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182283 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182289 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9"} Oct 06 15:10:30 
crc kubenswrapper[4888]: I1006 15:10:30.182295 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182302 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182311 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182316 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182321 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182327 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182332 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182336 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182342 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182346 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182351 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182356 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182363 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182372 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182378 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182384 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182389 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182394 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182399 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182404 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182410 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182415 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182421 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182428 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hzx2q" event={"ID":"61cf5a40-f739-4ffe-8544-34bcd92aadc1","Type":"ContainerDied","Data":"f54c6fa0aa9c55e2c090b2af2d7612a1d51198c16004b0010c94e5031bd7a89c"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182436 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182442 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182447 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182459 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182464 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182471 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182476 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182481 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182487 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.182492 4888 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1"} Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.217998 4888 scope.go:117] "RemoveContainer" containerID="f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242379 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-run-ovn\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242431 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-systemd-units\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242455 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-cni-bin\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242476 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-var-lib-openvswitch\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242516 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f5215af6-f683-44ac-9e44-744fd05d22ff-ovnkube-script-lib\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242542 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-cni-netd\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242565 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-slash\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242583 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-log-socket\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242611 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f5215af6-f683-44ac-9e44-744fd05d22ff-env-overrides\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242650 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f5215af6-f683-44ac-9e44-744fd05d22ff-ovn-node-metrics-cert\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242669 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-run-systemd\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242694 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-run-openvswitch\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242717 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: 
I1006 15:10:30.242738 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f5215af6-f683-44ac-9e44-744fd05d22ff-ovnkube-config\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242767 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-kubelet\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242788 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242850 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxdzb\" (UniqueName: \"kubernetes.io/projected/f5215af6-f683-44ac-9e44-744fd05d22ff-kube-api-access-hxdzb\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242883 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-node-log\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242924 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-run-netns\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242948 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-etc-openvswitch\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.242988 4888 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-node-log\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243003 4888 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243016 4888 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243029 4888 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243041 4888 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243052 4888 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243063 4888 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243074 4888 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243085 4888 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243096 4888 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61cf5a40-f739-4ffe-8544-34bcd92aadc1-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243108 4888 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-slash\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243118 4888 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243130 4888 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243142 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phx28\" (UniqueName: \"kubernetes.io/projected/61cf5a40-f739-4ffe-8544-34bcd92aadc1-kube-api-access-phx28\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243154 4888 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243165 4888 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243176 4888 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.243187 4888 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/61cf5a40-f739-4ffe-8544-34bcd92aadc1-log-socket\") on node \"crc\" DevicePath \"\"" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.249885 4888 scope.go:117] "RemoveContainer" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.252428 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hzx2q"] Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.259317 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hzx2q"] Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.288990 4888 scope.go:117] "RemoveContainer" containerID="16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.301605 4888 scope.go:117] "RemoveContainer" containerID="642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.312422 4888 scope.go:117] "RemoveContainer" containerID="91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.323110 4888 scope.go:117] "RemoveContainer" containerID="9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.334915 4888 scope.go:117] "RemoveContainer" containerID="eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.344691 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-systemd-units\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.344723 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-cni-bin\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.344740 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-var-lib-openvswitch\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.344769 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f5215af6-f683-44ac-9e44-744fd05d22ff-ovnkube-script-lib\") pod \"ovnkube-node-6r2rf\" (UID: 
\"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.344831 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-systemd-units\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.344879 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-cni-bin\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.344786 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-cni-netd\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.344962 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-slash\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.344971 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-cni-netd\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.344901 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-var-lib-openvswitch\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345030 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-log-socket\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.344984 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-log-socket\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345054 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-slash\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345120 4888 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f5215af6-f683-44ac-9e44-744fd05d22ff-env-overrides\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345166 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f5215af6-f683-44ac-9e44-744fd05d22ff-ovn-node-metrics-cert\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345189 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-run-systemd\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345219 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-run-openvswitch\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345250 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345275 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f5215af6-f683-44ac-9e44-744fd05d22ff-ovnkube-config\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345321 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-kubelet\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345350 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345393 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxdzb\" (UniqueName: \"kubernetes.io/projected/f5215af6-f683-44ac-9e44-744fd05d22ff-kube-api-access-hxdzb\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345433 4888 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-node-log\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345464 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f5215af6-f683-44ac-9e44-744fd05d22ff-ovnkube-script-lib\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345480 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-run-netns\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345513 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-etc-openvswitch\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345534 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-run-ovn\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345610 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-run-ovn\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345623 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f5215af6-f683-44ac-9e44-744fd05d22ff-env-overrides\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345652 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-kubelet\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345667 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-node-log\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345683 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-run-netns\") pod 
\"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345693 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345723 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-etc-openvswitch\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345754 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-run-systemd\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345875 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f5215af6-f683-44ac-9e44-744fd05d22ff-ovnkube-config\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345917 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-run-openvswitch\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.345943 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5215af6-f683-44ac-9e44-744fd05d22ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.347423 4888 scope.go:117] "RemoveContainer" containerID="c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.348959 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f5215af6-f683-44ac-9e44-744fd05d22ff-ovn-node-metrics-cert\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.357830 4888 scope.go:117] "RemoveContainer" containerID="b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.361590 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxdzb\" (UniqueName: \"kubernetes.io/projected/f5215af6-f683-44ac-9e44-744fd05d22ff-kube-api-access-hxdzb\") pod \"ovnkube-node-6r2rf\" (UID: \"f5215af6-f683-44ac-9e44-744fd05d22ff\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.369685 4888 scope.go:117] "RemoveContainer" containerID="c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.380189 4888 scope.go:117] "RemoveContainer" containerID="f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.380591 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe\": container with ID starting with f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe not found: ID does not exist" containerID="f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.380629 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe"} err="failed to get container status \"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe\": rpc error: code = NotFound desc = could not find container \"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe\": container with ID starting with f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.380652 4888 scope.go:117] "RemoveContainer" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.381091 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\": container with ID starting with fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc not found: ID does not exist" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.381112 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc"} err="failed to get container status \"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\": rpc error: code = NotFound desc = could not find container \"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\": container with ID starting with fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.381125 4888 scope.go:117] "RemoveContainer" containerID="16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.381466 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\": container with ID starting with 16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad not found: ID does not exist" containerID="16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.381486 4888 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad"} err="failed to get container status \"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\": rpc error: code = NotFound desc = could not find container \"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\": container with ID starting with 16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.381505 4888 scope.go:117] "RemoveContainer" containerID="642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.381825 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\": container with ID starting with 642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3 not found: ID does not exist" containerID="642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.381855 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3"} err="failed to get container status \"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\": rpc error: code = NotFound desc = could not find container \"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\": container with ID starting with 642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.381878 4888 scope.go:117] "RemoveContainer" containerID="91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.382178 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\": container with ID starting with 91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166 not found: ID does not exist" containerID="91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.382198 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166"} err="failed to get container status \"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\": rpc error: code = NotFound desc = could not find container \"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\": container with ID starting with 91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.382221 4888 scope.go:117] "RemoveContainer" containerID="9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.382542 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\": container with ID starting with 9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79 not found: ID does not exist" 
containerID="9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.382574 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79"} err="failed to get container status \"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\": rpc error: code = NotFound desc = could not find container \"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\": container with ID starting with 9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.383862 4888 scope.go:117] "RemoveContainer" containerID="eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.384160 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\": container with ID starting with eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb not found: ID does not exist" containerID="eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.384197 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb"} err="failed to get container status \"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\": rpc error: code = NotFound desc = could not find container \"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\": container with ID starting with eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.384216 4888 scope.go:117] "RemoveContainer" containerID="c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.384550 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\": container with ID starting with c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d not found: ID does not exist" containerID="c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.384574 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d"} err="failed to get container status \"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\": rpc error: code = NotFound desc = could not find container \"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\": container with ID starting with c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.384594 4888 scope.go:117] "RemoveContainer" containerID="b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.384828 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\": container with ID starting with b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9 not found: ID does not exist" containerID="b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.384847 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9"} err="failed to get container status \"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\": rpc error: code = NotFound desc = could not find container \"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\": container with ID starting with b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.384860 4888 scope.go:117] "RemoveContainer" containerID="c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1" Oct 06 15:10:30 crc kubenswrapper[4888]: E1006 15:10:30.385193 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\": container with ID starting with c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1 not found: ID does not exist" containerID="c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.385221 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1"} err="failed to get container status \"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\": rpc error: code = NotFound desc = could not find container \"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\": container with ID starting with c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.385245 4888 scope.go:117] "RemoveContainer" containerID="f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.385560 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe"} err="failed to get container status \"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe\": rpc error: code = NotFound desc = could not find container \"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe\": container with ID starting with f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.385580 4888 scope.go:117] "RemoveContainer" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.386058 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc"} err="failed to get container status \"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\": rpc error: code = NotFound desc = could not find container \"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\": container with ID starting with 
fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.386079 4888 scope.go:117] "RemoveContainer" containerID="16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.386336 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad"} err="failed to get container status \"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\": rpc error: code = NotFound desc = could not find container \"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\": container with ID starting with 16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.386353 4888 scope.go:117] "RemoveContainer" containerID="642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.386652 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3"} err="failed to get container status \"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\": rpc error: code = NotFound desc = could not find container \"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\": container with ID starting with 642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.386672 4888 scope.go:117] "RemoveContainer" containerID="91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.387012 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166"} err="failed to get container status \"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\": rpc error: code = NotFound desc = could not find container \"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\": container with ID starting with 91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.387041 4888 scope.go:117] "RemoveContainer" containerID="9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.387458 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79"} err="failed to get container status \"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\": rpc error: code = NotFound desc = could not find container \"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\": container with ID starting with 9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.387482 4888 scope.go:117] "RemoveContainer" containerID="eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.387742 4888 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb"} err="failed to get container status \"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\": rpc error: code = NotFound desc = could not find container \"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\": container with ID starting with eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.387760 4888 scope.go:117] "RemoveContainer" containerID="c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.388129 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d"} err="failed to get container status \"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\": rpc error: code = NotFound desc = could not find container \"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\": container with ID starting with c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.388147 4888 scope.go:117] "RemoveContainer" containerID="b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.388368 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9"} err="failed to get container status \"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\": rpc error: code = NotFound desc = could not find container \"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\": container with ID starting with b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.388388 4888 scope.go:117] "RemoveContainer" containerID="c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.388648 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1"} err="failed to get container status \"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\": rpc error: code = NotFound desc = could not find container \"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\": container with ID starting with c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.388686 4888 scope.go:117] "RemoveContainer" containerID="f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.389093 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe"} err="failed to get container status \"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe\": rpc error: code = NotFound desc = could not find container \"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe\": container with ID starting with f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe not found: ID does not exist" Oct 
06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.389120 4888 scope.go:117] "RemoveContainer" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.389433 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc"} err="failed to get container status \"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\": rpc error: code = NotFound desc = could not find container \"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\": container with ID starting with fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.389460 4888 scope.go:117] "RemoveContainer" containerID="16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.389674 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad"} err="failed to get container status \"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\": rpc error: code = NotFound desc = could not find container \"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\": container with ID starting with 16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.389698 4888 scope.go:117] "RemoveContainer" containerID="642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.390034 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3"} err="failed to get container status \"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\": rpc error: code = NotFound desc = could not find container \"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\": container with ID starting with 642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.390060 4888 scope.go:117] "RemoveContainer" containerID="91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.390322 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166"} err="failed to get container status \"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\": rpc error: code = NotFound desc = could not find container \"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\": container with ID starting with 91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.390345 4888 scope.go:117] "RemoveContainer" containerID="9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.390612 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79"} err="failed to get container status 
\"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\": rpc error: code = NotFound desc = could not find container \"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\": container with ID starting with 9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.390630 4888 scope.go:117] "RemoveContainer" containerID="eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.390867 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb"} err="failed to get container status \"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\": rpc error: code = NotFound desc = could not find container \"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\": container with ID starting with eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.390891 4888 scope.go:117] "RemoveContainer" containerID="c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.391091 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d"} err="failed to get container status \"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\": rpc error: code = NotFound desc = could not find container \"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\": container with ID starting with c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.391111 4888 scope.go:117] "RemoveContainer" containerID="b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.391341 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9"} err="failed to get container status \"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\": rpc error: code = NotFound desc = could not find container \"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\": container with ID starting with b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.391361 4888 scope.go:117] "RemoveContainer" containerID="c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.391603 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1"} err="failed to get container status \"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\": rpc error: code = NotFound desc = could not find container \"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\": container with ID starting with c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.391618 4888 scope.go:117] "RemoveContainer" 
containerID="f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.391874 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe"} err="failed to get container status \"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe\": rpc error: code = NotFound desc = could not find container \"f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe\": container with ID starting with f3c5221774dee8fef0dac13f4fdfd31873dbbbba4851042b5a02d94cfa1428fe not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.391890 4888 scope.go:117] "RemoveContainer" containerID="fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.392108 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc"} err="failed to get container status \"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\": rpc error: code = NotFound desc = could not find container \"fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc\": container with ID starting with fd0cc223bf7a953f7d68174b8f1e728f825a7e11f93ce348ab34cba2af1ee4cc not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.392125 4888 scope.go:117] "RemoveContainer" containerID="16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.392410 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad"} err="failed to get container status \"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\": rpc error: code = NotFound desc = could not find container \"16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad\": container with ID starting with 16707580796b70396da6305a8974734c66ee556172923e09e5bda2d0cd6e4fad not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.392430 4888 scope.go:117] "RemoveContainer" containerID="642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.392643 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3"} err="failed to get container status \"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\": rpc error: code = NotFound desc = could not find container \"642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3\": container with ID starting with 642e43ea320778183eb03414c0eae493402a630adfaca9cec754b3519e9af3f3 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.392659 4888 scope.go:117] "RemoveContainer" containerID="91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.392990 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166"} err="failed to get container status \"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\": rpc error: code = NotFound desc = could not find 
container \"91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166\": container with ID starting with 91201f0ce508d78f8a67d3e2be7f307a6d72a11331319b7e2b3dcf90bf81c166 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.393005 4888 scope.go:117] "RemoveContainer" containerID="9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.393205 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79"} err="failed to get container status \"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\": rpc error: code = NotFound desc = could not find container \"9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79\": container with ID starting with 9ed7c4ebaf091beec4fb65373c20fca590d4a574bd7a2fa50b3e5548219b6f79 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.393221 4888 scope.go:117] "RemoveContainer" containerID="eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.393485 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb"} err="failed to get container status \"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\": rpc error: code = NotFound desc = could not find container \"eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb\": container with ID starting with eb7ab9a9c3d5b59348eec7a4c061197bdab02dc4b61c07159f3e473a48cd76fb not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.393505 4888 scope.go:117] "RemoveContainer" containerID="c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.393813 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d"} err="failed to get container status \"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\": rpc error: code = NotFound desc = could not find container \"c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d\": container with ID starting with c28656c9327ca14db2b244fa8f8207093f24147bbe0813c3bd29bb1e0ee1ef6d not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.393829 4888 scope.go:117] "RemoveContainer" containerID="b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.394070 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9"} err="failed to get container status \"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\": rpc error: code = NotFound desc = could not find container \"b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9\": container with ID starting with b6a802d3a74fe61d159bedb95a4361c421d240b578a931c4b8429c726d1519c9 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.394088 4888 scope.go:117] "RemoveContainer" containerID="c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.394316 4888 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1"} err="failed to get container status \"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\": rpc error: code = NotFound desc = could not find container \"c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1\": container with ID starting with c9261535315bcee970695a327a68ba7047daf35fc0932144c2946ed477e629b1 not found: ID does not exist" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.475016 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:30 crc kubenswrapper[4888]: I1006 15:10:30.928182 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61cf5a40-f739-4ffe-8544-34bcd92aadc1" path="/var/lib/kubelet/pods/61cf5a40-f739-4ffe-8544-34bcd92aadc1/volumes" Oct 06 15:10:31 crc kubenswrapper[4888]: I1006 15:10:31.187617 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hw8s9_a8a92e6a-76c9-4370-b509-56d6e41f99de/kube-multus/1.log" Oct 06 15:10:31 crc kubenswrapper[4888]: I1006 15:10:31.190111 4888 generic.go:334] "Generic (PLEG): container finished" podID="f5215af6-f683-44ac-9e44-744fd05d22ff" containerID="5b51fb6b234e60b74afeb33573662bf6286d4a16fd049fb45a9417032a571031" exitCode=0 Oct 06 15:10:31 crc kubenswrapper[4888]: I1006 15:10:31.190142 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" event={"ID":"f5215af6-f683-44ac-9e44-744fd05d22ff","Type":"ContainerDied","Data":"5b51fb6b234e60b74afeb33573662bf6286d4a16fd049fb45a9417032a571031"} Oct 06 15:10:31 crc kubenswrapper[4888]: I1006 15:10:31.190162 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" event={"ID":"f5215af6-f683-44ac-9e44-744fd05d22ff","Type":"ContainerStarted","Data":"155850de9cefdd6ca81365501ccd3e16c0efabb67adb839de26cdd035ad6a22a"} Oct 06 15:10:32 crc kubenswrapper[4888]: I1006 15:10:32.197929 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" event={"ID":"f5215af6-f683-44ac-9e44-744fd05d22ff","Type":"ContainerStarted","Data":"ef8e5efa155fd38ba8f55fda4a66c6420d23512cc5776cb64cd054e6b4c3afe2"} Oct 06 15:10:32 crc kubenswrapper[4888]: I1006 15:10:32.198502 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" event={"ID":"f5215af6-f683-44ac-9e44-744fd05d22ff","Type":"ContainerStarted","Data":"e860779c673a7f104ca895964204c054a195fb7d2821fc95c60926e50bda8823"} Oct 06 15:10:32 crc kubenswrapper[4888]: I1006 15:10:32.198519 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" event={"ID":"f5215af6-f683-44ac-9e44-744fd05d22ff","Type":"ContainerStarted","Data":"b0db30f84b19b2edac88c1e235a12386e23fea1033ee3ed0bc80f3f60a6cd03e"} Oct 06 15:10:32 crc kubenswrapper[4888]: I1006 15:10:32.198531 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" event={"ID":"f5215af6-f683-44ac-9e44-744fd05d22ff","Type":"ContainerStarted","Data":"e226e8aa5e282779e7955f76bb5d4421c9a01cb9a9af33b65ac998af28ee8e12"} Oct 06 15:10:32 crc kubenswrapper[4888]: I1006 15:10:32.198541 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" 
event={"ID":"f5215af6-f683-44ac-9e44-744fd05d22ff","Type":"ContainerStarted","Data":"62a767f15daed980c1da4f5d8ee01dce584f700596fc7c837200ac8928a36e0f"} Oct 06 15:10:32 crc kubenswrapper[4888]: I1006 15:10:32.198552 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" event={"ID":"f5215af6-f683-44ac-9e44-744fd05d22ff","Type":"ContainerStarted","Data":"b8f4a84c220c4586bde155f736dcfb3e22f95534dcbaf01b6a99862ec04f6ca7"} Oct 06 15:10:32 crc kubenswrapper[4888]: I1006 15:10:32.563458 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:10:32 crc kubenswrapper[4888]: I1006 15:10:32.563514 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:10:32 crc kubenswrapper[4888]: I1006 15:10:32.563554 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:10:32 crc kubenswrapper[4888]: I1006 15:10:32.564388 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e98908deb283e6a036eb37ab0790f5913cfd911db2848acf3a6ebbd35a13b160"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 15:10:32 crc kubenswrapper[4888]: I1006 15:10:32.564445 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://e98908deb283e6a036eb37ab0790f5913cfd911db2848acf3a6ebbd35a13b160" gracePeriod=600 Oct 06 15:10:33 crc kubenswrapper[4888]: I1006 15:10:33.204754 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="e98908deb283e6a036eb37ab0790f5913cfd911db2848acf3a6ebbd35a13b160" exitCode=0 Oct 06 15:10:33 crc kubenswrapper[4888]: I1006 15:10:33.204835 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"e98908deb283e6a036eb37ab0790f5913cfd911db2848acf3a6ebbd35a13b160"} Oct 06 15:10:33 crc kubenswrapper[4888]: I1006 15:10:33.205082 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"2a27765b71e89e6df1e1c89446e393b644a2f95a6e1272b73bf5478141df6f61"} Oct 06 15:10:33 crc kubenswrapper[4888]: I1006 15:10:33.205731 4888 scope.go:117] "RemoveContainer" containerID="0fc88084e8dfb0c728018c0d641727f0db19b908b971796df23c7ddf2d6bca30" Oct 06 15:10:34 crc kubenswrapper[4888]: I1006 15:10:34.215635 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" 
event={"ID":"f5215af6-f683-44ac-9e44-744fd05d22ff","Type":"ContainerStarted","Data":"8487c9a770723b33bd6517741dc1098adfae12eb35f8d4019058b037148e0e59"} Oct 06 15:10:37 crc kubenswrapper[4888]: I1006 15:10:37.243487 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" event={"ID":"f5215af6-f683-44ac-9e44-744fd05d22ff","Type":"ContainerStarted","Data":"cb782108f9813b284890dcdfccbcaf208d51ee2345f5dbed00f926f74ff48ded"} Oct 06 15:10:37 crc kubenswrapper[4888]: I1006 15:10:37.245058 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:37 crc kubenswrapper[4888]: I1006 15:10:37.245087 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:37 crc kubenswrapper[4888]: I1006 15:10:37.245126 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:37 crc kubenswrapper[4888]: I1006 15:10:37.272005 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" podStartSLOduration=7.271985733 podStartE2EDuration="7.271985733s" podCreationTimestamp="2025-10-06 15:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:10:37.271116476 +0000 UTC m=+577.083467204" watchObservedRunningTime="2025-10-06 15:10:37.271985733 +0000 UTC m=+577.084336451" Oct 06 15:10:37 crc kubenswrapper[4888]: I1006 15:10:37.276430 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:37 crc kubenswrapper[4888]: I1006 15:10:37.278393 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:10:42 crc kubenswrapper[4888]: I1006 15:10:42.921444 4888 scope.go:117] "RemoveContainer" containerID="4275f071ce10fcca2346d3403453ef0d290da1985e1671ef7066d9abc889c4c2" Oct 06 15:10:43 crc kubenswrapper[4888]: I1006 15:10:43.283050 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hw8s9_a8a92e6a-76c9-4370-b509-56d6e41f99de/kube-multus/1.log" Oct 06 15:10:43 crc kubenswrapper[4888]: I1006 15:10:43.283457 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hw8s9" event={"ID":"a8a92e6a-76c9-4370-b509-56d6e41f99de","Type":"ContainerStarted","Data":"32552f05cc68db1944718e0e291f430ed447d4ac2b093916767953d1b754d56e"} Oct 06 15:11:00 crc kubenswrapper[4888]: I1006 15:11:00.505924 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6r2rf" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.421680 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q"] Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.424579 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.430699 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.433898 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q"] Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.539609 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.539655 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtx2\" (UniqueName: \"kubernetes.io/projected/27fd59b4-ff18-477e-b242-b7b60574de55-kube-api-access-6rtx2\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.539684 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.641147 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.641267 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.641302 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rtx2\" (UniqueName: \"kubernetes.io/projected/27fd59b4-ff18-477e-b242-b7b60574de55-kube-api-access-6rtx2\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.641737 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.641869 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.664255 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rtx2\" (UniqueName: \"kubernetes.io/projected/27fd59b4-ff18-477e-b242-b7b60574de55-kube-api-access-6rtx2\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:09 crc kubenswrapper[4888]: I1006 15:11:09.744753 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:10 crc kubenswrapper[4888]: I1006 15:11:10.129494 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q"] Oct 06 15:11:10 crc kubenswrapper[4888]: I1006 15:11:10.419546 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" event={"ID":"27fd59b4-ff18-477e-b242-b7b60574de55","Type":"ContainerStarted","Data":"378b2f35334b691bb568452b39c5ae4a0c76aeb70d5584497d284e58ed9614f2"} Oct 06 15:11:10 crc kubenswrapper[4888]: I1006 15:11:10.419585 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" event={"ID":"27fd59b4-ff18-477e-b242-b7b60574de55","Type":"ContainerStarted","Data":"c19a445706b17b608a6fb64ee14cd791751c37def7de2a2b231cc98ce2196ea1"} Oct 06 15:11:11 crc kubenswrapper[4888]: I1006 15:11:11.436981 4888 generic.go:334] "Generic (PLEG): container finished" podID="27fd59b4-ff18-477e-b242-b7b60574de55" containerID="378b2f35334b691bb568452b39c5ae4a0c76aeb70d5584497d284e58ed9614f2" exitCode=0 Oct 06 15:11:11 crc kubenswrapper[4888]: I1006 15:11:11.437025 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" event={"ID":"27fd59b4-ff18-477e-b242-b7b60574de55","Type":"ContainerDied","Data":"378b2f35334b691bb568452b39c5ae4a0c76aeb70d5584497d284e58ed9614f2"} Oct 06 15:11:13 crc kubenswrapper[4888]: I1006 15:11:13.450228 4888 generic.go:334] "Generic (PLEG): container finished" podID="27fd59b4-ff18-477e-b242-b7b60574de55" containerID="a7b0543b6acbbfe51681b1bc20bd44c5c39992473734c6cb6529758e0fe82073" exitCode=0 Oct 06 15:11:13 crc kubenswrapper[4888]: I1006 15:11:13.450559 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" 
event={"ID":"27fd59b4-ff18-477e-b242-b7b60574de55","Type":"ContainerDied","Data":"a7b0543b6acbbfe51681b1bc20bd44c5c39992473734c6cb6529758e0fe82073"} Oct 06 15:11:14 crc kubenswrapper[4888]: I1006 15:11:14.459752 4888 generic.go:334] "Generic (PLEG): container finished" podID="27fd59b4-ff18-477e-b242-b7b60574de55" containerID="f5bae93193ec58ade89f9de9e8e4735062a42b3226c4149a128aebecfdc0e00e" exitCode=0 Oct 06 15:11:14 crc kubenswrapper[4888]: I1006 15:11:14.459863 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" event={"ID":"27fd59b4-ff18-477e-b242-b7b60574de55","Type":"ContainerDied","Data":"f5bae93193ec58ade89f9de9e8e4735062a42b3226c4149a128aebecfdc0e00e"} Oct 06 15:11:15 crc kubenswrapper[4888]: I1006 15:11:15.681882 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:15 crc kubenswrapper[4888]: I1006 15:11:15.714373 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rtx2\" (UniqueName: \"kubernetes.io/projected/27fd59b4-ff18-477e-b242-b7b60574de55-kube-api-access-6rtx2\") pod \"27fd59b4-ff18-477e-b242-b7b60574de55\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " Oct 06 15:11:15 crc kubenswrapper[4888]: I1006 15:11:15.714635 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-bundle\") pod \"27fd59b4-ff18-477e-b242-b7b60574de55\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " Oct 06 15:11:15 crc kubenswrapper[4888]: I1006 15:11:15.714705 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-util\") pod \"27fd59b4-ff18-477e-b242-b7b60574de55\" (UID: \"27fd59b4-ff18-477e-b242-b7b60574de55\") " Oct 06 15:11:15 crc kubenswrapper[4888]: I1006 15:11:15.716130 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-bundle" (OuterVolumeSpecName: "bundle") pod "27fd59b4-ff18-477e-b242-b7b60574de55" (UID: "27fd59b4-ff18-477e-b242-b7b60574de55"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:11:15 crc kubenswrapper[4888]: I1006 15:11:15.722084 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27fd59b4-ff18-477e-b242-b7b60574de55-kube-api-access-6rtx2" (OuterVolumeSpecName: "kube-api-access-6rtx2") pod "27fd59b4-ff18-477e-b242-b7b60574de55" (UID: "27fd59b4-ff18-477e-b242-b7b60574de55"). InnerVolumeSpecName "kube-api-access-6rtx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:11:15 crc kubenswrapper[4888]: I1006 15:11:15.816729 4888 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:11:15 crc kubenswrapper[4888]: I1006 15:11:15.816773 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rtx2\" (UniqueName: \"kubernetes.io/projected/27fd59b4-ff18-477e-b242-b7b60574de55-kube-api-access-6rtx2\") on node \"crc\" DevicePath \"\"" Oct 06 15:11:15 crc kubenswrapper[4888]: I1006 15:11:15.899330 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-util" (OuterVolumeSpecName: "util") pod "27fd59b4-ff18-477e-b242-b7b60574de55" (UID: "27fd59b4-ff18-477e-b242-b7b60574de55"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:11:15 crc kubenswrapper[4888]: I1006 15:11:15.917422 4888 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/27fd59b4-ff18-477e-b242-b7b60574de55-util\") on node \"crc\" DevicePath \"\"" Oct 06 15:11:16 crc kubenswrapper[4888]: I1006 15:11:16.472890 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" event={"ID":"27fd59b4-ff18-477e-b242-b7b60574de55","Type":"ContainerDied","Data":"c19a445706b17b608a6fb64ee14cd791751c37def7de2a2b231cc98ce2196ea1"} Oct 06 15:11:16 crc kubenswrapper[4888]: I1006 15:11:16.473255 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c19a445706b17b608a6fb64ee14cd791751c37def7de2a2b231cc98ce2196ea1" Oct 06 15:11:16 crc kubenswrapper[4888]: I1006 15:11:16.472974 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.709973 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-twx77"] Oct 06 15:11:18 crc kubenswrapper[4888]: E1006 15:11:18.710157 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd59b4-ff18-477e-b242-b7b60574de55" containerName="pull" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.710167 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd59b4-ff18-477e-b242-b7b60574de55" containerName="pull" Oct 06 15:11:18 crc kubenswrapper[4888]: E1006 15:11:18.710182 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd59b4-ff18-477e-b242-b7b60574de55" containerName="util" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.710187 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd59b4-ff18-477e-b242-b7b60574de55" containerName="util" Oct 06 15:11:18 crc kubenswrapper[4888]: E1006 15:11:18.710199 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27fd59b4-ff18-477e-b242-b7b60574de55" containerName="extract" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.710207 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="27fd59b4-ff18-477e-b242-b7b60574de55" containerName="extract" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.710298 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="27fd59b4-ff18-477e-b242-b7b60574de55" containerName="extract" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.710617 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-twx77" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.713748 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.714500 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.714683 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-twbd6" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.729288 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-twx77"] Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.756952 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jbrp\" (UniqueName: \"kubernetes.io/projected/6156a5b3-4323-433b-bde7-ad0bebee7659-kube-api-access-2jbrp\") pod \"nmstate-operator-858ddd8f98-twx77\" (UID: \"6156a5b3-4323-433b-bde7-ad0bebee7659\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-twx77" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.858367 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jbrp\" (UniqueName: \"kubernetes.io/projected/6156a5b3-4323-433b-bde7-ad0bebee7659-kube-api-access-2jbrp\") pod \"nmstate-operator-858ddd8f98-twx77\" (UID: \"6156a5b3-4323-433b-bde7-ad0bebee7659\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-twx77" Oct 06 15:11:18 crc kubenswrapper[4888]: I1006 15:11:18.880790 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jbrp\" 
(UniqueName: \"kubernetes.io/projected/6156a5b3-4323-433b-bde7-ad0bebee7659-kube-api-access-2jbrp\") pod \"nmstate-operator-858ddd8f98-twx77\" (UID: \"6156a5b3-4323-433b-bde7-ad0bebee7659\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-twx77" Oct 06 15:11:19 crc kubenswrapper[4888]: I1006 15:11:19.034744 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-twx77" Oct 06 15:11:19 crc kubenswrapper[4888]: I1006 15:11:19.263869 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-twx77"] Oct 06 15:11:19 crc kubenswrapper[4888]: I1006 15:11:19.502665 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-twx77" event={"ID":"6156a5b3-4323-433b-bde7-ad0bebee7659","Type":"ContainerStarted","Data":"b8f5e5485554daa87bf38662beb3a1b84ce5e121dbfd7d5e339a1d5fd0faac12"} Oct 06 15:11:22 crc kubenswrapper[4888]: I1006 15:11:22.515837 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-twx77" event={"ID":"6156a5b3-4323-433b-bde7-ad0bebee7659","Type":"ContainerStarted","Data":"32db4d2789a94f28d4ba7972a4e1e5b7ef35487ff12c6d3d7f1b8f686b4a6e05"} Oct 06 15:11:22 crc kubenswrapper[4888]: I1006 15:11:22.532098 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-twx77" podStartSLOduration=2.316724286 podStartE2EDuration="4.53208097s" podCreationTimestamp="2025-10-06 15:11:18 +0000 UTC" firstStartedPulling="2025-10-06 15:11:19.271320867 +0000 UTC m=+619.083671585" lastFinishedPulling="2025-10-06 15:11:21.486677551 +0000 UTC m=+621.299028269" observedRunningTime="2025-10-06 15:11:22.531228787 +0000 UTC m=+622.343579505" watchObservedRunningTime="2025-10-06 15:11:22.53208097 +0000 UTC m=+622.344431688" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.040553 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6"] Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.042000 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.048513 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5gdp9" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.064765 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-pv582"] Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.065592 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.067781 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.070150 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6"] Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.076921 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-pv582"] Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.103920 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc9nd\" (UniqueName: \"kubernetes.io/projected/c8df3027-9c1a-48ab-bfd6-5a6cb0753f95-kube-api-access-fc9nd\") pod \"nmstate-webhook-6cdbc54649-pv582\" (UID: \"c8df3027-9c1a-48ab-bfd6-5a6cb0753f95\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.103982 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4492\" (UniqueName: \"kubernetes.io/projected/559ac274-e135-4305-8659-c0d3d3e0a832-kube-api-access-t4492\") pod \"nmstate-metrics-fdff9cb8d-f7sj6\" (UID: \"559ac274-e135-4305-8659-c0d3d3e0a832\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.104322 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c8df3027-9c1a-48ab-bfd6-5a6cb0753f95-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-pv582\" (UID: \"c8df3027-9c1a-48ab-bfd6-5a6cb0753f95\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.106059 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8j27m"] Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.106736 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.205546 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc9nd\" (UniqueName: \"kubernetes.io/projected/c8df3027-9c1a-48ab-bfd6-5a6cb0753f95-kube-api-access-fc9nd\") pod \"nmstate-webhook-6cdbc54649-pv582\" (UID: \"c8df3027-9c1a-48ab-bfd6-5a6cb0753f95\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.205594 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4492\" (UniqueName: \"kubernetes.io/projected/559ac274-e135-4305-8659-c0d3d3e0a832-kube-api-access-t4492\") pod \"nmstate-metrics-fdff9cb8d-f7sj6\" (UID: \"559ac274-e135-4305-8659-c0d3d3e0a832\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.205619 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f5de385b-9a6a-4dcd-819b-35500a617b29-ovs-socket\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.205640 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f5de385b-9a6a-4dcd-819b-35500a617b29-nmstate-lock\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.205670 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtxbp\" (UniqueName: \"kubernetes.io/projected/f5de385b-9a6a-4dcd-819b-35500a617b29-kube-api-access-mtxbp\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.205690 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f5de385b-9a6a-4dcd-819b-35500a617b29-dbus-socket\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.205862 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c8df3027-9c1a-48ab-bfd6-5a6cb0753f95-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-pv582\" (UID: \"c8df3027-9c1a-48ab-bfd6-5a6cb0753f95\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" Oct 06 15:11:30 crc kubenswrapper[4888]: E1006 15:11:30.206018 4888 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Oct 06 15:11:30 crc kubenswrapper[4888]: E1006 15:11:30.206062 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8df3027-9c1a-48ab-bfd6-5a6cb0753f95-tls-key-pair podName:c8df3027-9c1a-48ab-bfd6-5a6cb0753f95 nodeName:}" failed. No retries permitted until 2025-10-06 15:11:30.706046241 +0000 UTC m=+630.518396949 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c8df3027-9c1a-48ab-bfd6-5a6cb0753f95-tls-key-pair") pod "nmstate-webhook-6cdbc54649-pv582" (UID: "c8df3027-9c1a-48ab-bfd6-5a6cb0753f95") : secret "openshift-nmstate-webhook" not found Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.220995 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4"] Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.221607 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.223601 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.223627 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cwxh9" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.225890 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.239634 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4"] Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.250720 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc9nd\" (UniqueName: \"kubernetes.io/projected/c8df3027-9c1a-48ab-bfd6-5a6cb0753f95-kube-api-access-fc9nd\") pod \"nmstate-webhook-6cdbc54649-pv582\" (UID: \"c8df3027-9c1a-48ab-bfd6-5a6cb0753f95\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.254763 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4492\" (UniqueName: \"kubernetes.io/projected/559ac274-e135-4305-8659-c0d3d3e0a832-kube-api-access-t4492\") pod \"nmstate-metrics-fdff9cb8d-f7sj6\" (UID: \"559ac274-e135-4305-8659-c0d3d3e0a832\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.306604 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtxbp\" (UniqueName: \"kubernetes.io/projected/f5de385b-9a6a-4dcd-819b-35500a617b29-kube-api-access-mtxbp\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.306663 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f5de385b-9a6a-4dcd-819b-35500a617b29-dbus-socket\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.306702 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mgtf\" (UniqueName: \"kubernetes.io/projected/a8ba4133-701b-48a4-a066-5ef4e96ca10d-kube-api-access-4mgtf\") pod \"nmstate-console-plugin-6b874cbd85-vr9q4\" (UID: \"a8ba4133-701b-48a4-a066-5ef4e96ca10d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.306757 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
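Both failed secret mounts in this stretch (`tls-key-pair` above, `plugin-serving-cert` just below) show the same recovery loop: the mount fails because the secret does not exist yet, `nestedpendingoperations` refuses retries for a delay (`durationBeforeRetry 500ms`) that grows on repeated failures, and once the operator creates the secret a later attempt logs `MountVolume.SetUp succeeded`. A minimal sketch of that retry-with-backoff shape (the `mountWithBackoff` helper is illustrative; the real kubelet is driven by its sync loop rather than by sleeping):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountWithBackoff retries a failing mount, doubling the wait between
// attempts, mirroring the durationBeforeRetry behaviour seen in the log.
func mountWithBackoff(mount func() error, initial time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		err := mount()
		if err == nil {
			return nil
		}
		fmt.Printf("mount failed: %v (durationBeforeRetry %v)\n", err, delay)
		time.Sleep(delay)
		delay *= 2 // exponential backoff between attempts
	}
	return errors.New("mount still failing after retries")
}

func main() {
	calls := 0
	mount := func() error {
		calls++
		if calls < 3 { // the secret "appears" on the third attempt
			return errors.New(`secret "openshift-nmstate-webhook" not found`)
		}
		return nil
	}
	fmt.Println(mountWithBackoff(mount, 500*time.Millisecond, 5))
}
```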
for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ba4133-701b-48a4-a066-5ef4e96ca10d-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-vr9q4\" (UID: \"a8ba4133-701b-48a4-a066-5ef4e96ca10d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.306857 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a8ba4133-701b-48a4-a066-5ef4e96ca10d-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-vr9q4\" (UID: \"a8ba4133-701b-48a4-a066-5ef4e96ca10d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.306885 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f5de385b-9a6a-4dcd-819b-35500a617b29-ovs-socket\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.306910 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f5de385b-9a6a-4dcd-819b-35500a617b29-nmstate-lock\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.306979 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f5de385b-9a6a-4dcd-819b-35500a617b29-dbus-socket\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.307006 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f5de385b-9a6a-4dcd-819b-35500a617b29-nmstate-lock\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.306979 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f5de385b-9a6a-4dcd-819b-35500a617b29-ovs-socket\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.332760 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtxbp\" (UniqueName: \"kubernetes.io/projected/f5de385b-9a6a-4dcd-819b-35500a617b29-kube-api-access-mtxbp\") pod \"nmstate-handler-8j27m\" (UID: \"f5de385b-9a6a-4dcd-819b-35500a617b29\") " pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.358959 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.407597 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a8ba4133-701b-48a4-a066-5ef4e96ca10d-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-vr9q4\" (UID: \"a8ba4133-701b-48a4-a066-5ef4e96ca10d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.407962 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mgtf\" (UniqueName: \"kubernetes.io/projected/a8ba4133-701b-48a4-a066-5ef4e96ca10d-kube-api-access-4mgtf\") pod \"nmstate-console-plugin-6b874cbd85-vr9q4\" (UID: \"a8ba4133-701b-48a4-a066-5ef4e96ca10d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.408013 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ba4133-701b-48a4-a066-5ef4e96ca10d-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-vr9q4\" (UID: \"a8ba4133-701b-48a4-a066-5ef4e96ca10d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:30 crc kubenswrapper[4888]: E1006 15:11:30.408126 4888 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Oct 06 15:11:30 crc kubenswrapper[4888]: E1006 15:11:30.408179 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8ba4133-701b-48a4-a066-5ef4e96ca10d-plugin-serving-cert podName:a8ba4133-701b-48a4-a066-5ef4e96ca10d nodeName:}" failed. No retries permitted until 2025-10-06 15:11:30.908162876 +0000 UTC m=+630.720513594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a8ba4133-701b-48a4-a066-5ef4e96ca10d-plugin-serving-cert") pod "nmstate-console-plugin-6b874cbd85-vr9q4" (UID: "a8ba4133-701b-48a4-a066-5ef4e96ca10d") : secret "plugin-serving-cert" not found Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.408660 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a8ba4133-701b-48a4-a066-5ef4e96ca10d-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-vr9q4\" (UID: \"a8ba4133-701b-48a4-a066-5ef4e96ca10d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.418657 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.434533 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mgtf\" (UniqueName: \"kubernetes.io/projected/a8ba4133-701b-48a4-a066-5ef4e96ca10d-kube-api-access-4mgtf\") pod \"nmstate-console-plugin-6b874cbd85-vr9q4\" (UID: \"a8ba4133-701b-48a4-a066-5ef4e96ca10d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.437657 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-85f54b89f6-h7969"] Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.438255 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.454054 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85f54b89f6-h7969"] Oct 06 15:11:30 crc kubenswrapper[4888]: W1006 15:11:30.466321 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5de385b_9a6a_4dcd_819b_35500a617b29.slice/crio-e47cb898d01c6a11bff667c42bcaa5b05849914763338d229cf895d7ba3677d6 WatchSource:0}: Error finding container e47cb898d01c6a11bff667c42bcaa5b05849914763338d229cf895d7ba3677d6: Status 404 returned error can't find the container with id e47cb898d01c6a11bff667c42bcaa5b05849914763338d229cf895d7ba3677d6 Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.508668 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7809e15c-e55d-4324-99af-da2dd12c212a-console-serving-cert\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.508706 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-trusted-ca-bundle\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.508934 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-console-config\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.509044 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d986b\" (UniqueName: \"kubernetes.io/projected/7809e15c-e55d-4324-99af-da2dd12c212a-kube-api-access-d986b\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.509106 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-service-ca\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.509289 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7809e15c-e55d-4324-99af-da2dd12c212a-console-oauth-config\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.509417 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-oauth-serving-cert\") pod 
\"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.558410 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8j27m" event={"ID":"f5de385b-9a6a-4dcd-819b-35500a617b29","Type":"ContainerStarted","Data":"e47cb898d01c6a11bff667c42bcaa5b05849914763338d229cf895d7ba3677d6"} Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.610451 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7809e15c-e55d-4324-99af-da2dd12c212a-console-serving-cert\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.610500 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-trusted-ca-bundle\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.610548 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-console-config\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.610575 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d986b\" (UniqueName: \"kubernetes.io/projected/7809e15c-e55d-4324-99af-da2dd12c212a-kube-api-access-d986b\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.610592 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-service-ca\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.610615 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7809e15c-e55d-4324-99af-da2dd12c212a-console-oauth-config\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.610636 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-oauth-serving-cert\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.611575 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-oauth-serving-cert\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " 
pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.615103 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-trusted-ca-bundle\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.616145 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-console-config\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.616880 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7809e15c-e55d-4324-99af-da2dd12c212a-service-ca\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.620462 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7809e15c-e55d-4324-99af-da2dd12c212a-console-serving-cert\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.621398 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7809e15c-e55d-4324-99af-da2dd12c212a-console-oauth-config\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.627851 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6"] Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.635228 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d986b\" (UniqueName: \"kubernetes.io/projected/7809e15c-e55d-4324-99af-da2dd12c212a-kube-api-access-d986b\") pod \"console-85f54b89f6-h7969\" (UID: \"7809e15c-e55d-4324-99af-da2dd12c212a\") " pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.712101 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c8df3027-9c1a-48ab-bfd6-5a6cb0753f95-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-pv582\" (UID: \"c8df3027-9c1a-48ab-bfd6-5a6cb0753f95\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.716362 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c8df3027-9c1a-48ab-bfd6-5a6cb0753f95-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-pv582\" (UID: \"c8df3027-9c1a-48ab-bfd6-5a6cb0753f95\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.763689 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.915532 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ba4133-701b-48a4-a066-5ef4e96ca10d-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-vr9q4\" (UID: \"a8ba4133-701b-48a4-a066-5ef4e96ca10d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.921345 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8ba4133-701b-48a4-a066-5ef4e96ca10d-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-vr9q4\" (UID: \"a8ba4133-701b-48a4-a066-5ef4e96ca10d\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.966240 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85f54b89f6-h7969"] Oct 06 15:11:30 crc kubenswrapper[4888]: W1006 15:11:30.969772 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7809e15c_e55d_4324_99af_da2dd12c212a.slice/crio-c94c4ea7abd196a1c2a43f734ac938349cad38fda88cb2e2beafdca537045b6a WatchSource:0}: Error finding container c94c4ea7abd196a1c2a43f734ac938349cad38fda88cb2e2beafdca537045b6a: Status 404 returned error can't find the container with id c94c4ea7abd196a1c2a43f734ac938349cad38fda88cb2e2beafdca537045b6a Oct 06 15:11:30 crc kubenswrapper[4888]: I1006 15:11:30.979407 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" Oct 06 15:11:31 crc kubenswrapper[4888]: I1006 15:11:31.139273 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" Oct 06 15:11:31 crc kubenswrapper[4888]: I1006 15:11:31.176757 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-pv582"] Oct 06 15:11:31 crc kubenswrapper[4888]: W1006 15:11:31.184591 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8df3027_9c1a_48ab_bfd6_5a6cb0753f95.slice/crio-f3d1167bd48f6c6c61c33232f99e223cc21f1abb93819257ccbba5c6153b8c5f WatchSource:0}: Error finding container f3d1167bd48f6c6c61c33232f99e223cc21f1abb93819257ccbba5c6153b8c5f: Status 404 returned error can't find the container with id f3d1167bd48f6c6c61c33232f99e223cc21f1abb93819257ccbba5c6153b8c5f Oct 06 15:11:31 crc kubenswrapper[4888]: I1006 15:11:31.361569 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4"] Oct 06 15:11:31 crc kubenswrapper[4888]: W1006 15:11:31.369313 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8ba4133_701b_48a4_a066_5ef4e96ca10d.slice/crio-e7c4f5f3a13deade92ae8d0c67932caf6249c94eb6385866bd72638315405d07 WatchSource:0}: Error finding container e7c4f5f3a13deade92ae8d0c67932caf6249c94eb6385866bd72638315405d07: Status 404 returned error can't find the container with id e7c4f5f3a13deade92ae8d0c67932caf6249c94eb6385866bd72638315405d07 Oct 06 15:11:31 crc kubenswrapper[4888]: I1006 15:11:31.568141 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" event={"ID":"c8df3027-9c1a-48ab-bfd6-5a6cb0753f95","Type":"ContainerStarted","Data":"f3d1167bd48f6c6c61c33232f99e223cc21f1abb93819257ccbba5c6153b8c5f"} Oct 06 15:11:31 crc kubenswrapper[4888]: I1006 15:11:31.570354 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85f54b89f6-h7969" event={"ID":"7809e15c-e55d-4324-99af-da2dd12c212a","Type":"ContainerStarted","Data":"70d0921ec742cf605c78c5e5948cd06c049e271f4ad9c0627c197ae17cd4a555"} Oct 06 15:11:31 crc kubenswrapper[4888]: I1006 15:11:31.570396 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85f54b89f6-h7969" event={"ID":"7809e15c-e55d-4324-99af-da2dd12c212a","Type":"ContainerStarted","Data":"c94c4ea7abd196a1c2a43f734ac938349cad38fda88cb2e2beafdca537045b6a"} Oct 06 15:11:31 crc kubenswrapper[4888]: I1006 15:11:31.572922 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6" event={"ID":"559ac274-e135-4305-8659-c0d3d3e0a832","Type":"ContainerStarted","Data":"6d5190d580ce81a81a3fd72396ba0ebba01704e521c7f82195bcc12e7313c532"} Oct 06 15:11:31 crc kubenswrapper[4888]: I1006 15:11:31.576263 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" event={"ID":"a8ba4133-701b-48a4-a066-5ef4e96ca10d","Type":"ContainerStarted","Data":"e7c4f5f3a13deade92ae8d0c67932caf6249c94eb6385866bd72638315405d07"} Oct 06 15:11:31 crc kubenswrapper[4888]: I1006 15:11:31.595121 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-85f54b89f6-h7969" podStartSLOduration=1.595106027 podStartE2EDuration="1.595106027s" podCreationTimestamp="2025-10-06 15:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-10-06 15:11:31.592624982 +0000 UTC m=+631.404975700" watchObservedRunningTime="2025-10-06 15:11:31.595106027 +0000 UTC m=+631.407456745" Oct 06 15:11:33 crc kubenswrapper[4888]: I1006 15:11:33.588297 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6" event={"ID":"559ac274-e135-4305-8659-c0d3d3e0a832","Type":"ContainerStarted","Data":"82094a6c55a7f766a31fda64a4d869f8d759c00e9d516d8e06ee8e5588b2b093"} Oct 06 15:11:33 crc kubenswrapper[4888]: I1006 15:11:33.590574 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8j27m" event={"ID":"f5de385b-9a6a-4dcd-819b-35500a617b29","Type":"ContainerStarted","Data":"6d248694567bb3d62b469fa4a89843b8299b7ec78bf7038494583995c5768b37"} Oct 06 15:11:33 crc kubenswrapper[4888]: I1006 15:11:33.590653 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:33 crc kubenswrapper[4888]: I1006 15:11:33.596716 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" event={"ID":"c8df3027-9c1a-48ab-bfd6-5a6cb0753f95","Type":"ContainerStarted","Data":"7f17ad02cb23a89e58239330b066708539e9df97143ab2e02095135b09a59852"} Oct 06 15:11:33 crc kubenswrapper[4888]: I1006 15:11:33.596960 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" Oct 06 15:11:33 crc kubenswrapper[4888]: I1006 15:11:33.623255 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8j27m" podStartSLOduration=1.03619291 podStartE2EDuration="3.623237068s" podCreationTimestamp="2025-10-06 15:11:30 +0000 UTC" firstStartedPulling="2025-10-06 15:11:30.469255002 +0000 UTC m=+630.281605720" lastFinishedPulling="2025-10-06 15:11:33.05629916 +0000 UTC m=+632.868649878" observedRunningTime="2025-10-06 15:11:33.606790356 +0000 UTC m=+633.419141074" watchObservedRunningTime="2025-10-06 15:11:33.623237068 +0000 UTC m=+633.435587796" Oct 06 15:11:33 crc kubenswrapper[4888]: I1006 15:11:33.626915 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" podStartSLOduration=1.756985164 podStartE2EDuration="3.626904854s" podCreationTimestamp="2025-10-06 15:11:30 +0000 UTC" firstStartedPulling="2025-10-06 15:11:31.188612898 +0000 UTC m=+631.000963616" lastFinishedPulling="2025-10-06 15:11:33.058532588 +0000 UTC m=+632.870883306" observedRunningTime="2025-10-06 15:11:33.622716204 +0000 UTC m=+633.435066922" watchObservedRunningTime="2025-10-06 15:11:33.626904854 +0000 UTC m=+633.439255582" Oct 06 15:11:35 crc kubenswrapper[4888]: I1006 15:11:35.614518 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" event={"ID":"a8ba4133-701b-48a4-a066-5ef4e96ca10d","Type":"ContainerStarted","Data":"b9c6ac8e5d12bf6854c883a1e9b558bcda1b86ac30a4f0fba594cf2fbda45f90"} Oct 06 15:11:35 crc kubenswrapper[4888]: I1006 15:11:35.631926 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-vr9q4" podStartSLOduration=1.839508224 podStartE2EDuration="5.631906827s" podCreationTimestamp="2025-10-06 15:11:30 +0000 UTC" firstStartedPulling="2025-10-06 15:11:31.372179135 +0000 UTC m=+631.184529853" lastFinishedPulling="2025-10-06 15:11:35.164577698 +0000 UTC 
m=+634.976928456" observedRunningTime="2025-10-06 15:11:35.627788829 +0000 UTC m=+635.440139567" watchObservedRunningTime="2025-10-06 15:11:35.631906827 +0000 UTC m=+635.444257545" Oct 06 15:11:36 crc kubenswrapper[4888]: I1006 15:11:36.622829 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6" event={"ID":"559ac274-e135-4305-8659-c0d3d3e0a832","Type":"ContainerStarted","Data":"ca82e97ce83892698e27a71cf63290629ae644ec8d98d7514b46d9b4d046e5c0"} Oct 06 15:11:40 crc kubenswrapper[4888]: I1006 15:11:40.445417 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8j27m" Oct 06 15:11:40 crc kubenswrapper[4888]: I1006 15:11:40.464598 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-f7sj6" podStartSLOduration=4.890249434 podStartE2EDuration="10.464577444s" podCreationTimestamp="2025-10-06 15:11:30 +0000 UTC" firstStartedPulling="2025-10-06 15:11:30.637368363 +0000 UTC m=+630.449719081" lastFinishedPulling="2025-10-06 15:11:36.211696383 +0000 UTC m=+636.024047091" observedRunningTime="2025-10-06 15:11:36.648767116 +0000 UTC m=+636.461117834" watchObservedRunningTime="2025-10-06 15:11:40.464577444 +0000 UTC m=+640.276928162" Oct 06 15:11:40 crc kubenswrapper[4888]: I1006 15:11:40.764380 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:40 crc kubenswrapper[4888]: I1006 15:11:40.764456 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:40 crc kubenswrapper[4888]: I1006 15:11:40.770747 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:41 crc kubenswrapper[4888]: I1006 15:11:41.655228 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-85f54b89f6-h7969" Oct 06 15:11:41 crc kubenswrapper[4888]: I1006 15:11:41.728594 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vrp9q"] Oct 06 15:11:50 crc kubenswrapper[4888]: I1006 15:11:50.984961 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-pv582" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.571269 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr"] Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.573536 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.575461 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.578952 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr"] Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.656988 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.657123 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmmz\" (UniqueName: \"kubernetes.io/projected/613a43a7-c01b-481f-bd3f-387d90df61ec-kube-api-access-ljmmz\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.657180 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.758786 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljmmz\" (UniqueName: \"kubernetes.io/projected/613a43a7-c01b-481f-bd3f-387d90df61ec-kube-api-access-ljmmz\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.758871 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.758925 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.759361 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.759411 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.778609 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljmmz\" (UniqueName: \"kubernetes.io/projected/613a43a7-c01b-481f-bd3f-387d90df61ec-kube-api-access-ljmmz\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:03 crc kubenswrapper[4888]: I1006 15:12:03.904163 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:04 crc kubenswrapper[4888]: I1006 15:12:04.299972 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr"] Oct 06 15:12:04 crc kubenswrapper[4888]: I1006 15:12:04.775267 4888 generic.go:334] "Generic (PLEG): container finished" podID="613a43a7-c01b-481f-bd3f-387d90df61ec" containerID="70a2cec06400b60e9f885d3694e8d72d4a7ff0a3d9974e629f7aa1c313340ba1" exitCode=0 Oct 06 15:12:04 crc kubenswrapper[4888]: I1006 15:12:04.775380 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" event={"ID":"613a43a7-c01b-481f-bd3f-387d90df61ec","Type":"ContainerDied","Data":"70a2cec06400b60e9f885d3694e8d72d4a7ff0a3d9974e629f7aa1c313340ba1"} Oct 06 15:12:04 crc kubenswrapper[4888]: I1006 15:12:04.775568 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" event={"ID":"613a43a7-c01b-481f-bd3f-387d90df61ec","Type":"ContainerStarted","Data":"da7abb2f611c1012a3b110591ae3a55b73a8f8461e69e49da551d965c8bca07e"} Oct 06 15:12:06 crc kubenswrapper[4888]: I1006 15:12:06.783001 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-vrp9q" podUID="20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" containerName="console" containerID="cri-o://4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1" gracePeriod=15 Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.110495 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vrp9q_20dd2c4e-8a25-4494-a69e-4ee7ef46fa39/console/0.log" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.110762 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.202751 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-config\") pod \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.202857 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-serving-cert\") pod \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.202922 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-service-ca\") pod \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.202951 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-oauth-serving-cert\") pod \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.202975 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db8md\" (UniqueName: \"kubernetes.io/projected/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-kube-api-access-db8md\") pod \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.203012 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-trusted-ca-bundle\") pod \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.203049 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-oauth-config\") pod \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\" (UID: \"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39\") " Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.203576 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-config" (OuterVolumeSpecName: "console-config") pod "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" (UID: "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.203592 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-service-ca" (OuterVolumeSpecName: "service-ca") pod "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" (UID: "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.203666 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" (UID: "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.203846 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" (UID: "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.204032 4888 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.204046 4888 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.204054 4888 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.204062 4888 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.212341 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" (UID: "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.212608 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" (UID: "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.213148 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-kube-api-access-db8md" (OuterVolumeSpecName: "kube-api-access-db8md") pod "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" (UID: "20dd2c4e-8a25-4494-a69e-4ee7ef46fa39"). InnerVolumeSpecName "kube-api-access-db8md". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.305493 4888 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.305549 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db8md\" (UniqueName: \"kubernetes.io/projected/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-kube-api-access-db8md\") on node \"crc\" DevicePath \"\"" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.305594 4888 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.792300 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vrp9q_20dd2c4e-8a25-4494-a69e-4ee7ef46fa39/console/0.log" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.792353 4888 generic.go:334] "Generic (PLEG): container finished" podID="20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" containerID="4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1" exitCode=2 Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.792420 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vrp9q" event={"ID":"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39","Type":"ContainerDied","Data":"4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1"} Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.792424 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vrp9q" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.792554 4888 scope.go:117] "RemoveContainer" containerID="4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.792638 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vrp9q" event={"ID":"20dd2c4e-8a25-4494-a69e-4ee7ef46fa39","Type":"ContainerDied","Data":"382e607b01d1926bdb7c53489613bb1baac83a0c3f0969e86e59d3032eb47974"} Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.795633 4888 generic.go:334] "Generic (PLEG): container finished" podID="613a43a7-c01b-481f-bd3f-387d90df61ec" containerID="1caa145f3891b585b36463079f75c84d6c3d0595b57a42a250deb0ec2e2d051a" exitCode=0 Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.795673 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" event={"ID":"613a43a7-c01b-481f-bd3f-387d90df61ec","Type":"ContainerDied","Data":"1caa145f3891b585b36463079f75c84d6c3d0595b57a42a250deb0ec2e2d051a"} Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.820423 4888 scope.go:117] "RemoveContainer" containerID="4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1" Oct 06 15:12:07 crc kubenswrapper[4888]: E1006 15:12:07.820943 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1\": container with ID starting with 4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1 not found: ID does not exist" containerID="4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.820982 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1"} err="failed to get container status \"4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1\": rpc error: code = NotFound desc = could not find container \"4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1\": container with ID starting with 4eaed1c9f101d79cbfcf85e2c65e2ee6ae31cb47fe920228cd5e47f9010ab5e1 not found: ID does not exist" Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.842989 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vrp9q"] Oct 06 15:12:07 crc kubenswrapper[4888]: I1006 15:12:07.847480 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-vrp9q"] Oct 06 15:12:08 crc kubenswrapper[4888]: I1006 15:12:08.803897 4888 generic.go:334] "Generic (PLEG): container finished" podID="613a43a7-c01b-481f-bd3f-387d90df61ec" containerID="c74eafa6168b1c39fb5d8995cca708324c86aa4ad5e4214022a2cd1bb2b8e45c" exitCode=0 Oct 06 15:12:08 crc kubenswrapper[4888]: I1006 15:12:08.804295 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" event={"ID":"613a43a7-c01b-481f-bd3f-387d90df61ec","Type":"ContainerDied","Data":"c74eafa6168b1c39fb5d8995cca708324c86aa4ad5e4214022a2cd1bb2b8e45c"} Oct 06 15:12:08 crc kubenswrapper[4888]: I1006 15:12:08.928749 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" 
path="/var/lib/kubelet/pods/20dd2c4e-8a25-4494-a69e-4ee7ef46fa39/volumes" Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.017539 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.138834 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljmmz\" (UniqueName: \"kubernetes.io/projected/613a43a7-c01b-481f-bd3f-387d90df61ec-kube-api-access-ljmmz\") pod \"613a43a7-c01b-481f-bd3f-387d90df61ec\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.138890 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-util\") pod \"613a43a7-c01b-481f-bd3f-387d90df61ec\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.139019 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-bundle\") pod \"613a43a7-c01b-481f-bd3f-387d90df61ec\" (UID: \"613a43a7-c01b-481f-bd3f-387d90df61ec\") " Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.140112 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-bundle" (OuterVolumeSpecName: "bundle") pod "613a43a7-c01b-481f-bd3f-387d90df61ec" (UID: "613a43a7-c01b-481f-bd3f-387d90df61ec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.145135 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/613a43a7-c01b-481f-bd3f-387d90df61ec-kube-api-access-ljmmz" (OuterVolumeSpecName: "kube-api-access-ljmmz") pod "613a43a7-c01b-481f-bd3f-387d90df61ec" (UID: "613a43a7-c01b-481f-bd3f-387d90df61ec"). InnerVolumeSpecName "kube-api-access-ljmmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.155958 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-util" (OuterVolumeSpecName: "util") pod "613a43a7-c01b-481f-bd3f-387d90df61ec" (UID: "613a43a7-c01b-481f-bd3f-387d90df61ec"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.240667 4888 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.240705 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljmmz\" (UniqueName: \"kubernetes.io/projected/613a43a7-c01b-481f-bd3f-387d90df61ec-kube-api-access-ljmmz\") on node \"crc\" DevicePath \"\"" Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.240718 4888 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/613a43a7-c01b-481f-bd3f-387d90df61ec-util\") on node \"crc\" DevicePath \"\"" Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.815893 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" event={"ID":"613a43a7-c01b-481f-bd3f-387d90df61ec","Type":"ContainerDied","Data":"da7abb2f611c1012a3b110591ae3a55b73a8f8461e69e49da551d965c8bca07e"} Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.815932 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr" Oct 06 15:12:10 crc kubenswrapper[4888]: I1006 15:12:10.815936 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da7abb2f611c1012a3b110591ae3a55b73a8f8461e69e49da551d965c8bca07e" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.504599 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6"] Oct 06 15:12:18 crc kubenswrapper[4888]: E1006 15:12:18.506474 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613a43a7-c01b-481f-bd3f-387d90df61ec" containerName="extract" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.506565 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="613a43a7-c01b-481f-bd3f-387d90df61ec" containerName="extract" Oct 06 15:12:18 crc kubenswrapper[4888]: E1006 15:12:18.506641 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613a43a7-c01b-481f-bd3f-387d90df61ec" containerName="util" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.506701 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="613a43a7-c01b-481f-bd3f-387d90df61ec" containerName="util" Oct 06 15:12:18 crc kubenswrapper[4888]: E1006 15:12:18.506770 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" containerName="console" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.506850 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" containerName="console" Oct 06 15:12:18 crc kubenswrapper[4888]: E1006 15:12:18.506941 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613a43a7-c01b-481f-bd3f-387d90df61ec" containerName="pull" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.507025 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="613a43a7-c01b-481f-bd3f-387d90df61ec" containerName="pull" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.507219 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="20dd2c4e-8a25-4494-a69e-4ee7ef46fa39" containerName="console" Oct 
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.507900 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6"
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.512733 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.512765 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-57m4p"
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.513035 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.515241 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.520846 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.581581 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6"]
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.661671 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f42e180-91fb-42d0-bf64-66c83c63001b-apiservice-cert\") pod \"metallb-operator-controller-manager-8599c478db-fs7c6\" (UID: \"5f42e180-91fb-42d0-bf64-66c83c63001b\") " pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6"
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.661807 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f42e180-91fb-42d0-bf64-66c83c63001b-webhook-cert\") pod \"metallb-operator-controller-manager-8599c478db-fs7c6\" (UID: \"5f42e180-91fb-42d0-bf64-66c83c63001b\") " pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6"
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.661904 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvmwf\" (UniqueName: \"kubernetes.io/projected/5f42e180-91fb-42d0-bf64-66c83c63001b-kube-api-access-vvmwf\") pod \"metallb-operator-controller-manager-8599c478db-fs7c6\" (UID: \"5f42e180-91fb-42d0-bf64-66c83c63001b\") " pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6"
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.763544 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f42e180-91fb-42d0-bf64-66c83c63001b-apiservice-cert\") pod \"metallb-operator-controller-manager-8599c478db-fs7c6\" (UID: \"5f42e180-91fb-42d0-bf64-66c83c63001b\") " pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6"
Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.763610 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName:
\"kubernetes.io/secret/5f42e180-91fb-42d0-bf64-66c83c63001b-webhook-cert\") pod \"metallb-operator-controller-manager-8599c478db-fs7c6\" (UID: \"5f42e180-91fb-42d0-bf64-66c83c63001b\") " pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.763655 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvmwf\" (UniqueName: \"kubernetes.io/projected/5f42e180-91fb-42d0-bf64-66c83c63001b-kube-api-access-vvmwf\") pod \"metallb-operator-controller-manager-8599c478db-fs7c6\" (UID: \"5f42e180-91fb-42d0-bf64-66c83c63001b\") " pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.773010 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f42e180-91fb-42d0-bf64-66c83c63001b-webhook-cert\") pod \"metallb-operator-controller-manager-8599c478db-fs7c6\" (UID: \"5f42e180-91fb-42d0-bf64-66c83c63001b\") " pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.785436 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f42e180-91fb-42d0-bf64-66c83c63001b-apiservice-cert\") pod \"metallb-operator-controller-manager-8599c478db-fs7c6\" (UID: \"5f42e180-91fb-42d0-bf64-66c83c63001b\") " pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.787749 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvmwf\" (UniqueName: \"kubernetes.io/projected/5f42e180-91fb-42d0-bf64-66c83c63001b-kube-api-access-vvmwf\") pod \"metallb-operator-controller-manager-8599c478db-fs7c6\" (UID: \"5f42e180-91fb-42d0-bf64-66c83c63001b\") " pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.868605 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.993182 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26"] Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.994005 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.997872 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.998093 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 06 15:12:18 crc kubenswrapper[4888]: I1006 15:12:18.998272 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4p9t7" Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.013578 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26"] Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.068187 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce47b00a-e949-469d-bb0c-618a40600f55-apiservice-cert\") pod \"metallb-operator-webhook-server-8bb6f8b6-sfr26\" (UID: \"ce47b00a-e949-469d-bb0c-618a40600f55\") " pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.068295 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6v59\" (UniqueName: \"kubernetes.io/projected/ce47b00a-e949-469d-bb0c-618a40600f55-kube-api-access-w6v59\") pod \"metallb-operator-webhook-server-8bb6f8b6-sfr26\" (UID: \"ce47b00a-e949-469d-bb0c-618a40600f55\") " pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.068331 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce47b00a-e949-469d-bb0c-618a40600f55-webhook-cert\") pod \"metallb-operator-webhook-server-8bb6f8b6-sfr26\" (UID: \"ce47b00a-e949-469d-bb0c-618a40600f55\") " pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.170670 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6v59\" (UniqueName: \"kubernetes.io/projected/ce47b00a-e949-469d-bb0c-618a40600f55-kube-api-access-w6v59\") pod \"metallb-operator-webhook-server-8bb6f8b6-sfr26\" (UID: \"ce47b00a-e949-469d-bb0c-618a40600f55\") " pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.171079 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce47b00a-e949-469d-bb0c-618a40600f55-webhook-cert\") pod \"metallb-operator-webhook-server-8bb6f8b6-sfr26\" (UID: \"ce47b00a-e949-469d-bb0c-618a40600f55\") " pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.171131 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce47b00a-e949-469d-bb0c-618a40600f55-apiservice-cert\") pod \"metallb-operator-webhook-server-8bb6f8b6-sfr26\" (UID: \"ce47b00a-e949-469d-bb0c-618a40600f55\") " pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.179152 4888 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce47b00a-e949-469d-bb0c-618a40600f55-apiservice-cert\") pod \"metallb-operator-webhook-server-8bb6f8b6-sfr26\" (UID: \"ce47b00a-e949-469d-bb0c-618a40600f55\") " pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.185344 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce47b00a-e949-469d-bb0c-618a40600f55-webhook-cert\") pod \"metallb-operator-webhook-server-8bb6f8b6-sfr26\" (UID: \"ce47b00a-e949-469d-bb0c-618a40600f55\") " pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.202598 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6v59\" (UniqueName: \"kubernetes.io/projected/ce47b00a-e949-469d-bb0c-618a40600f55-kube-api-access-w6v59\") pod \"metallb-operator-webhook-server-8bb6f8b6-sfr26\" (UID: \"ce47b00a-e949-469d-bb0c-618a40600f55\") " pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.310166 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.349915 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6"] Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.610700 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26"] Oct 06 15:12:19 crc kubenswrapper[4888]: W1006 15:12:19.618634 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce47b00a_e949_469d_bb0c_618a40600f55.slice/crio-5a0031a1065f54206b70ecb10c7a13a1b6dbd98e2766c87f5c8d4b8aba3d0394 WatchSource:0}: Error finding container 5a0031a1065f54206b70ecb10c7a13a1b6dbd98e2766c87f5c8d4b8aba3d0394: Status 404 returned error can't find the container with id 5a0031a1065f54206b70ecb10c7a13a1b6dbd98e2766c87f5c8d4b8aba3d0394 Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.858976 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6" event={"ID":"5f42e180-91fb-42d0-bf64-66c83c63001b","Type":"ContainerStarted","Data":"a69efff15e8e1b6907f7297830a640ff5d0186a0b8708dd0c8995e8a05dfad89"} Oct 06 15:12:19 crc kubenswrapper[4888]: I1006 15:12:19.859910 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" event={"ID":"ce47b00a-e949-469d-bb0c-618a40600f55","Type":"ContainerStarted","Data":"5a0031a1065f54206b70ecb10c7a13a1b6dbd98e2766c87f5c8d4b8aba3d0394"} Oct 06 15:12:25 crc kubenswrapper[4888]: I1006 15:12:25.901751 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6" event={"ID":"5f42e180-91fb-42d0-bf64-66c83c63001b","Type":"ContainerStarted","Data":"1a2effebdb2884132283ae7bd1a1f374d28485853a589acfb2de5c07cf6d6bbf"} Oct 06 15:12:25 crc kubenswrapper[4888]: I1006 15:12:25.902278 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6" Oct 06 15:12:25 crc kubenswrapper[4888]: I1006 15:12:25.903952 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" event={"ID":"ce47b00a-e949-469d-bb0c-618a40600f55","Type":"ContainerStarted","Data":"2b71b1b396d56476e5ad40232296b90851925270b22242322b8849b506badf02"} Oct 06 15:12:25 crc kubenswrapper[4888]: I1006 15:12:25.904095 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:25 crc kubenswrapper[4888]: I1006 15:12:25.919953 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6" podStartSLOduration=1.9705696609999999 podStartE2EDuration="7.919933104s" podCreationTimestamp="2025-10-06 15:12:18 +0000 UTC" firstStartedPulling="2025-10-06 15:12:19.376402858 +0000 UTC m=+679.188753576" lastFinishedPulling="2025-10-06 15:12:25.325766301 +0000 UTC m=+685.138117019" observedRunningTime="2025-10-06 15:12:25.918004373 +0000 UTC m=+685.730355101" watchObservedRunningTime="2025-10-06 15:12:25.919933104 +0000 UTC m=+685.732283842" Oct 06 15:12:25 crc kubenswrapper[4888]: I1006 15:12:25.948473 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" podStartSLOduration=2.164278413 podStartE2EDuration="7.948456409s" podCreationTimestamp="2025-10-06 15:12:18 +0000 UTC" firstStartedPulling="2025-10-06 15:12:19.621477185 +0000 UTC m=+679.433827903" lastFinishedPulling="2025-10-06 15:12:25.405655181 +0000 UTC m=+685.218005899" observedRunningTime="2025-10-06 15:12:25.945880288 +0000 UTC m=+685.758231016" watchObservedRunningTime="2025-10-06 15:12:25.948456409 +0000 UTC m=+685.760807127" Oct 06 15:12:32 crc kubenswrapper[4888]: I1006 15:12:32.563289 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:12:32 crc kubenswrapper[4888]: I1006 15:12:32.563728 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:12:39 crc kubenswrapper[4888]: I1006 15:12:39.336291 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8bb6f8b6-sfr26" Oct 06 15:12:58 crc kubenswrapper[4888]: I1006 15:12:58.871934 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8599c478db-fs7c6" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.582967 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc"] Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.583822 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.585635 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.587865 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-m9bpz"] Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.590302 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2prt2" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.594171 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.598414 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc"] Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.599138 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.599351 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.693973 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-xd85f"] Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.695369 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xd85f" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.700270 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.700296 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.700447 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bj57p" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.701444 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.704414 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-frr-conf\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.704692 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-frr-sockets\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.704821 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn2vb\" (UniqueName: \"kubernetes.io/projected/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-kube-api-access-pn2vb\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.704926 4888 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-reloader\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.705013 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a700faf-c900-4a48-814b-568b4eb5b60c-cert\") pod \"frr-k8s-webhook-server-64bf5d555-t22dc\" (UID: \"3a700faf-c900-4a48-814b-568b4eb5b60c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.705101 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-metrics\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.705208 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-metrics-certs\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.705367 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-frr-startup\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.705490 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plg4m\" (UniqueName: \"kubernetes.io/projected/3a700faf-c900-4a48-814b-568b4eb5b60c-kube-api-access-plg4m\") pod \"frr-k8s-webhook-server-64bf5d555-t22dc\" (UID: \"3a700faf-c900-4a48-814b-568b4eb5b60c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.717157 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-z57b5"] Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.718438 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.733436 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806596 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/12baee70-26bd-4484-92ec-a74a01b41356-metallb-excludel2\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806649 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-reloader\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806674 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a700faf-c900-4a48-814b-568b4eb5b60c-cert\") pod \"frr-k8s-webhook-server-64bf5d555-t22dc\" (UID: \"3a700faf-c900-4a48-814b-568b4eb5b60c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806695 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-metrics\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806712 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-metrics-certs\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806729 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a077b76d-9a5e-482a-8c03-efd3a93f1c62-cert\") pod \"controller-68d546b9d8-z57b5\" (UID: \"a077b76d-9a5e-482a-8c03-efd3a93f1c62\") " pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806744 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-memberlist\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806762 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-metrics-certs\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806821 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tz5x\" (UniqueName: \"kubernetes.io/projected/a077b76d-9a5e-482a-8c03-efd3a93f1c62-kube-api-access-2tz5x\") pod \"controller-68d546b9d8-z57b5\" 
(UID: \"a077b76d-9a5e-482a-8c03-efd3a93f1c62\") " pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806855 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-frr-startup\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806877 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plg4m\" (UniqueName: \"kubernetes.io/projected/3a700faf-c900-4a48-814b-568b4eb5b60c-kube-api-access-plg4m\") pod \"frr-k8s-webhook-server-64bf5d555-t22dc\" (UID: \"3a700faf-c900-4a48-814b-568b4eb5b60c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806903 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a077b76d-9a5e-482a-8c03-efd3a93f1c62-metrics-certs\") pod \"controller-68d546b9d8-z57b5\" (UID: \"a077b76d-9a5e-482a-8c03-efd3a93f1c62\") " pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806918 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-frr-conf\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806937 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-frr-sockets\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806955 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn2vb\" (UniqueName: \"kubernetes.io/projected/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-kube-api-access-pn2vb\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.806973 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpk55\" (UniqueName: \"kubernetes.io/projected/12baee70-26bd-4484-92ec-a74a01b41356-kube-api-access-mpk55\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:12:59 crc kubenswrapper[4888]: E1006 15:12:59.807100 4888 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Oct 06 15:12:59 crc kubenswrapper[4888]: E1006 15:12:59.807160 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a700faf-c900-4a48-814b-568b4eb5b60c-cert podName:3a700faf-c900-4a48-814b-568b4eb5b60c nodeName:}" failed. No retries permitted until 2025-10-06 15:13:00.307139352 +0000 UTC m=+720.119490150 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a700faf-c900-4a48-814b-568b4eb5b60c-cert") pod "frr-k8s-webhook-server-64bf5d555-t22dc" (UID: "3a700faf-c900-4a48-814b-568b4eb5b60c") : secret "frr-k8s-webhook-server-cert" not found Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.807340 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-metrics\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: E1006 15:12:59.807470 4888 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Oct 06 15:12:59 crc kubenswrapper[4888]: E1006 15:12:59.807521 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-metrics-certs podName:e8af0a6f-bf6e-4822-827f-6e40bf4c9f15 nodeName:}" failed. No retries permitted until 2025-10-06 15:13:00.307502674 +0000 UTC m=+720.119853482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-metrics-certs") pod "frr-k8s-m9bpz" (UID: "e8af0a6f-bf6e-4822-827f-6e40bf4c9f15") : secret "frr-k8s-certs-secret" not found Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.807098 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-reloader\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.807760 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-frr-sockets\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.808006 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-frr-conf\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.808310 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-frr-startup\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.832446 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn2vb\" (UniqueName: \"kubernetes.io/projected/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-kube-api-access-pn2vb\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.846484 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-z57b5"] Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.848518 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plg4m\" (UniqueName: 
\"kubernetes.io/projected/3a700faf-c900-4a48-814b-568b4eb5b60c-kube-api-access-plg4m\") pod \"frr-k8s-webhook-server-64bf5d555-t22dc\" (UID: \"3a700faf-c900-4a48-814b-568b4eb5b60c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.911519 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-metrics-certs\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.911575 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a077b76d-9a5e-482a-8c03-efd3a93f1c62-cert\") pod \"controller-68d546b9d8-z57b5\" (UID: \"a077b76d-9a5e-482a-8c03-efd3a93f1c62\") " pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.911598 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-memberlist\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.911661 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tz5x\" (UniqueName: \"kubernetes.io/projected/a077b76d-9a5e-482a-8c03-efd3a93f1c62-kube-api-access-2tz5x\") pod \"controller-68d546b9d8-z57b5\" (UID: \"a077b76d-9a5e-482a-8c03-efd3a93f1c62\") " pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.911723 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a077b76d-9a5e-482a-8c03-efd3a93f1c62-metrics-certs\") pod \"controller-68d546b9d8-z57b5\" (UID: \"a077b76d-9a5e-482a-8c03-efd3a93f1c62\") " pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.911754 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpk55\" (UniqueName: \"kubernetes.io/projected/12baee70-26bd-4484-92ec-a74a01b41356-kube-api-access-mpk55\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.911778 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/12baee70-26bd-4484-92ec-a74a01b41356-metallb-excludel2\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:12:59 crc kubenswrapper[4888]: E1006 15:12:59.911929 4888 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 06 15:12:59 crc kubenswrapper[4888]: E1006 15:12:59.911996 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-memberlist podName:12baee70-26bd-4484-92ec-a74a01b41356 nodeName:}" failed. No retries permitted until 2025-10-06 15:13:00.411978055 +0000 UTC m=+720.224328773 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-memberlist") pod "speaker-xd85f" (UID: "12baee70-26bd-4484-92ec-a74a01b41356") : secret "metallb-memberlist" not found Oct 06 15:12:59 crc kubenswrapper[4888]: E1006 15:12:59.912050 4888 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Oct 06 15:12:59 crc kubenswrapper[4888]: E1006 15:12:59.912079 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-metrics-certs podName:12baee70-26bd-4484-92ec-a74a01b41356 nodeName:}" failed. No retries permitted until 2025-10-06 15:13:00.412069748 +0000 UTC m=+720.224420466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-metrics-certs") pod "speaker-xd85f" (UID: "12baee70-26bd-4484-92ec-a74a01b41356") : secret "speaker-certs-secret" not found Oct 06 15:12:59 crc kubenswrapper[4888]: E1006 15:12:59.912529 4888 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Oct 06 15:12:59 crc kubenswrapper[4888]: E1006 15:12:59.912563 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a077b76d-9a5e-482a-8c03-efd3a93f1c62-metrics-certs podName:a077b76d-9a5e-482a-8c03-efd3a93f1c62 nodeName:}" failed. No retries permitted until 2025-10-06 15:13:00.412552563 +0000 UTC m=+720.224903281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a077b76d-9a5e-482a-8c03-efd3a93f1c62-metrics-certs") pod "controller-68d546b9d8-z57b5" (UID: "a077b76d-9a5e-482a-8c03-efd3a93f1c62") : secret "controller-certs-secret" not found Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.912641 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/12baee70-26bd-4484-92ec-a74a01b41356-metallb-excludel2\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.917265 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a077b76d-9a5e-482a-8c03-efd3a93f1c62-cert\") pod \"controller-68d546b9d8-z57b5\" (UID: \"a077b76d-9a5e-482a-8c03-efd3a93f1c62\") " pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.944755 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpk55\" (UniqueName: \"kubernetes.io/projected/12baee70-26bd-4484-92ec-a74a01b41356-kube-api-access-mpk55\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:12:59 crc kubenswrapper[4888]: I1006 15:12:59.963640 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tz5x\" (UniqueName: \"kubernetes.io/projected/a077b76d-9a5e-482a-8c03-efd3a93f1c62-kube-api-access-2tz5x\") pod \"controller-68d546b9d8-z57b5\" (UID: \"a077b76d-9a5e-482a-8c03-efd3a93f1c62\") " pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.317511 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/3a700faf-c900-4a48-814b-568b4eb5b60c-cert\") pod \"frr-k8s-webhook-server-64bf5d555-t22dc\" (UID: \"3a700faf-c900-4a48-814b-568b4eb5b60c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.317906 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-metrics-certs\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.320755 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e8af0a6f-bf6e-4822-827f-6e40bf4c9f15-metrics-certs\") pod \"frr-k8s-m9bpz\" (UID: \"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15\") " pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.320898 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a700faf-c900-4a48-814b-568b4eb5b60c-cert\") pod \"frr-k8s-webhook-server-64bf5d555-t22dc\" (UID: \"3a700faf-c900-4a48-814b-568b4eb5b60c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.419026 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a077b76d-9a5e-482a-8c03-efd3a93f1c62-metrics-certs\") pod \"controller-68d546b9d8-z57b5\" (UID: \"a077b76d-9a5e-482a-8c03-efd3a93f1c62\") " pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.419087 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-metrics-certs\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.419105 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-memberlist\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:13:00 crc kubenswrapper[4888]: E1006 15:13:00.419185 4888 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 06 15:13:00 crc kubenswrapper[4888]: E1006 15:13:00.419229 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-memberlist podName:12baee70-26bd-4484-92ec-a74a01b41356 nodeName:}" failed. No retries permitted until 2025-10-06 15:13:01.419215097 +0000 UTC m=+721.231565815 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-memberlist") pod "speaker-xd85f" (UID: "12baee70-26bd-4484-92ec-a74a01b41356") : secret "metallb-memberlist" not found Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.422041 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-metrics-certs\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.422520 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a077b76d-9a5e-482a-8c03-efd3a93f1c62-metrics-certs\") pod \"controller-68d546b9d8-z57b5\" (UID: \"a077b76d-9a5e-482a-8c03-efd3a93f1c62\") " pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.502604 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.516670 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.635628 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.852245 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-z57b5"] Oct 06 15:13:00 crc kubenswrapper[4888]: W1006 15:13:00.859434 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda077b76d_9a5e_482a_8c03_efd3a93f1c62.slice/crio-ec5ec6cbe7f34e113d5f03f1ed5b8319e4b4c6159759d2df8e2afeb45c5ef3cd WatchSource:0}: Error finding container ec5ec6cbe7f34e113d5f03f1ed5b8319e4b4c6159759d2df8e2afeb45c5ef3cd: Status 404 returned error can't find the container with id ec5ec6cbe7f34e113d5f03f1ed5b8319e4b4c6159759d2df8e2afeb45c5ef3cd Oct 06 15:13:00 crc kubenswrapper[4888]: I1006 15:13:00.940006 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc"] Oct 06 15:13:01 crc kubenswrapper[4888]: I1006 15:13:01.105688 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m9bpz" event={"ID":"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15","Type":"ContainerStarted","Data":"6bdd11f37341096372ff3f1d6fe082e25f14f95a43c7d8120c3cf60178f30d97"} Oct 06 15:13:01 crc kubenswrapper[4888]: I1006 15:13:01.106836 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" event={"ID":"3a700faf-c900-4a48-814b-568b4eb5b60c","Type":"ContainerStarted","Data":"4d06dfa88867dcf48944af4635150fb49a58fe1842c8da69a7d356969bed7078"} Oct 06 15:13:01 crc kubenswrapper[4888]: I1006 15:13:01.108128 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-z57b5" event={"ID":"a077b76d-9a5e-482a-8c03-efd3a93f1c62","Type":"ContainerStarted","Data":"b0635cc1282f19df1a78abc1a07c8b13f7cde49bb0ab402403fec1d5d60882e4"} Oct 06 15:13:01 crc kubenswrapper[4888]: I1006 15:13:01.108157 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-z57b5" 
event={"ID":"a077b76d-9a5e-482a-8c03-efd3a93f1c62","Type":"ContainerStarted","Data":"ec5ec6cbe7f34e113d5f03f1ed5b8319e4b4c6159759d2df8e2afeb45c5ef3cd"} Oct 06 15:13:01 crc kubenswrapper[4888]: I1006 15:13:01.434040 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-memberlist\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:13:01 crc kubenswrapper[4888]: I1006 15:13:01.440604 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/12baee70-26bd-4484-92ec-a74a01b41356-memberlist\") pod \"speaker-xd85f\" (UID: \"12baee70-26bd-4484-92ec-a74a01b41356\") " pod="metallb-system/speaker-xd85f" Oct 06 15:13:01 crc kubenswrapper[4888]: I1006 15:13:01.513246 4888 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bj57p" Oct 06 15:13:01 crc kubenswrapper[4888]: I1006 15:13:01.521087 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-xd85f" Oct 06 15:13:01 crc kubenswrapper[4888]: W1006 15:13:01.543349 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12baee70_26bd_4484_92ec_a74a01b41356.slice/crio-4d1fdd2ad1a0de53a3aea0e289c45a4e063cfb1df5f4c4895ca993201aee54c2 WatchSource:0}: Error finding container 4d1fdd2ad1a0de53a3aea0e289c45a4e063cfb1df5f4c4895ca993201aee54c2: Status 404 returned error can't find the container with id 4d1fdd2ad1a0de53a3aea0e289c45a4e063cfb1df5f4c4895ca993201aee54c2 Oct 06 15:13:02 crc kubenswrapper[4888]: I1006 15:13:02.116531 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-z57b5" event={"ID":"a077b76d-9a5e-482a-8c03-efd3a93f1c62","Type":"ContainerStarted","Data":"8a559f72201c7e2314f7ff288c8e73acfaa9296f3ffbee43c82354c823b3e573"} Oct 06 15:13:02 crc kubenswrapper[4888]: I1006 15:13:02.116652 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-z57b5" Oct 06 15:13:02 crc kubenswrapper[4888]: I1006 15:13:02.118598 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xd85f" event={"ID":"12baee70-26bd-4484-92ec-a74a01b41356","Type":"ContainerStarted","Data":"7f5570a79d50735a98d712aa68f6ff2f294ac4b02dc2c90666d4b872421364c1"} Oct 06 15:13:02 crc kubenswrapper[4888]: I1006 15:13:02.118960 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xd85f" event={"ID":"12baee70-26bd-4484-92ec-a74a01b41356","Type":"ContainerStarted","Data":"4d1fdd2ad1a0de53a3aea0e289c45a4e063cfb1df5f4c4895ca993201aee54c2"} Oct 06 15:13:02 crc kubenswrapper[4888]: I1006 15:13:02.155640 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-z57b5" podStartSLOduration=3.155620685 podStartE2EDuration="3.155620685s" podCreationTimestamp="2025-10-06 15:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:13:02.15195568 +0000 UTC m=+721.964306408" watchObservedRunningTime="2025-10-06 15:13:02.155620685 +0000 UTC m=+721.967971403" Oct 06 15:13:02 crc kubenswrapper[4888]: I1006 15:13:02.563413 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:13:02 crc kubenswrapper[4888]: I1006 15:13:02.563472 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:13:03 crc kubenswrapper[4888]: I1006 15:13:03.144258 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-xd85f" event={"ID":"12baee70-26bd-4484-92ec-a74a01b41356","Type":"ContainerStarted","Data":"f2bc37d89d8c6966752c8ada34ef6cd604cc86d80b05d819463901f6b696b611"} Oct 06 15:13:03 crc kubenswrapper[4888]: I1006 15:13:03.144990 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-xd85f" Oct 06 15:13:03 crc kubenswrapper[4888]: I1006 15:13:03.173588 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-xd85f" podStartSLOduration=4.173555338 podStartE2EDuration="4.173555338s" podCreationTimestamp="2025-10-06 15:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:13:03.171580106 +0000 UTC m=+722.983930824" watchObservedRunningTime="2025-10-06 15:13:03.173555338 +0000 UTC m=+722.985906056" Oct 06 15:13:06 crc kubenswrapper[4888]: I1006 15:13:06.671023 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bphs2"] Oct 06 15:13:06 crc kubenswrapper[4888]: I1006 15:13:06.671461 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" podUID="8b65d758-78c2-4e61-8553-2298157b49a3" containerName="controller-manager" containerID="cri-o://96013f6c01db5b3b35dd5a9f325cd177e6ff9c629b2901554b08cc7eb1210b00" gracePeriod=30 Oct 06 15:13:06 crc kubenswrapper[4888]: I1006 15:13:06.782424 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t"] Oct 06 15:13:06 crc kubenswrapper[4888]: I1006 15:13:06.782924 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" podUID="4eb536d7-2076-4a86-ba81-e1c746ab6cf6" containerName="route-controller-manager" containerID="cri-o://042d7871103f4cc24d117f5fa2254a12ca9f367c1753eb2477b5cfa322cb073b" gracePeriod=30 Oct 06 15:13:07 crc kubenswrapper[4888]: I1006 15:13:07.168457 4888 generic.go:334] "Generic (PLEG): container finished" podID="8b65d758-78c2-4e61-8553-2298157b49a3" containerID="96013f6c01db5b3b35dd5a9f325cd177e6ff9c629b2901554b08cc7eb1210b00" exitCode=0 Oct 06 15:13:07 crc kubenswrapper[4888]: I1006 15:13:07.168549 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" event={"ID":"8b65d758-78c2-4e61-8553-2298157b49a3","Type":"ContainerDied","Data":"96013f6c01db5b3b35dd5a9f325cd177e6ff9c629b2901554b08cc7eb1210b00"} Oct 06 15:13:07 crc kubenswrapper[4888]: I1006 15:13:07.171547 4888 generic.go:334] "Generic (PLEG): container finished" 
podID="4eb536d7-2076-4a86-ba81-e1c746ab6cf6" containerID="042d7871103f4cc24d117f5fa2254a12ca9f367c1753eb2477b5cfa322cb073b" exitCode=0 Oct 06 15:13:07 crc kubenswrapper[4888]: I1006 15:13:07.171586 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" event={"ID":"4eb536d7-2076-4a86-ba81-e1c746ab6cf6","Type":"ContainerDied","Data":"042d7871103f4cc24d117f5fa2254a12ca9f367c1753eb2477b5cfa322cb073b"} Oct 06 15:13:07 crc kubenswrapper[4888]: I1006 15:13:07.305351 4888 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bphs2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Oct 06 15:13:07 crc kubenswrapper[4888]: I1006 15:13:07.305409 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" podUID="8b65d758-78c2-4e61-8553-2298157b49a3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.439039 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.493726 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.526590 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58c46f6d68-rfxsk"] Oct 06 15:13:09 crc kubenswrapper[4888]: E1006 15:13:09.526949 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb536d7-2076-4a86-ba81-e1c746ab6cf6" containerName="route-controller-manager" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.526966 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb536d7-2076-4a86-ba81-e1c746ab6cf6" containerName="route-controller-manager" Oct 06 15:13:09 crc kubenswrapper[4888]: E1006 15:13:09.526991 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b65d758-78c2-4e61-8553-2298157b49a3" containerName="controller-manager" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.527000 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b65d758-78c2-4e61-8553-2298157b49a3" containerName="controller-manager" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.527136 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b65d758-78c2-4e61-8553-2298157b49a3" containerName="controller-manager" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.527154 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb536d7-2076-4a86-ba81-e1c746ab6cf6" containerName="route-controller-manager" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.527609 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.549284 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c46f6d68-rfxsk"] Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.558746 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-client-ca\") pod \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.558824 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gprn\" (UniqueName: \"kubernetes.io/projected/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-kube-api-access-7gprn\") pod \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.558893 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-serving-cert\") pod \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.558928 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-proxy-ca-bundles\") pod \"8b65d758-78c2-4e61-8553-2298157b49a3\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.558959 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpw2m\" (UniqueName: \"kubernetes.io/projected/8b65d758-78c2-4e61-8553-2298157b49a3-kube-api-access-fpw2m\") pod \"8b65d758-78c2-4e61-8553-2298157b49a3\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.558993 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-config\") pod \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\" (UID: \"4eb536d7-2076-4a86-ba81-e1c746ab6cf6\") " Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.559024 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b65d758-78c2-4e61-8553-2298157b49a3-serving-cert\") pod \"8b65d758-78c2-4e61-8553-2298157b49a3\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.559087 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-config\") pod \"8b65d758-78c2-4e61-8553-2298157b49a3\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.559110 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-client-ca\") pod \"8b65d758-78c2-4e61-8553-2298157b49a3\" (UID: \"8b65d758-78c2-4e61-8553-2298157b49a3\") " Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.560511 4888 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b65d758-78c2-4e61-8553-2298157b49a3" (UID: "8b65d758-78c2-4e61-8553-2298157b49a3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.560579 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b65d758-78c2-4e61-8553-2298157b49a3" (UID: "8b65d758-78c2-4e61-8553-2298157b49a3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.576792 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4eb536d7-2076-4a86-ba81-e1c746ab6cf6" (UID: "4eb536d7-2076-4a86-ba81-e1c746ab6cf6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.577233 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-kube-api-access-7gprn" (OuterVolumeSpecName: "kube-api-access-7gprn") pod "4eb536d7-2076-4a86-ba81-e1c746ab6cf6" (UID: "4eb536d7-2076-4a86-ba81-e1c746ab6cf6"). InnerVolumeSpecName "kube-api-access-7gprn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.577285 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-config" (OuterVolumeSpecName: "config") pod "8b65d758-78c2-4e61-8553-2298157b49a3" (UID: "8b65d758-78c2-4e61-8553-2298157b49a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.565308 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-config" (OuterVolumeSpecName: "config") pod "4eb536d7-2076-4a86-ba81-e1c746ab6cf6" (UID: "4eb536d7-2076-4a86-ba81-e1c746ab6cf6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.578948 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b65d758-78c2-4e61-8553-2298157b49a3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b65d758-78c2-4e61-8553-2298157b49a3" (UID: "8b65d758-78c2-4e61-8553-2298157b49a3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.580338 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b65d758-78c2-4e61-8553-2298157b49a3-kube-api-access-fpw2m" (OuterVolumeSpecName: "kube-api-access-fpw2m") pod "8b65d758-78c2-4e61-8553-2298157b49a3" (UID: "8b65d758-78c2-4e61-8553-2298157b49a3"). InnerVolumeSpecName "kube-api-access-fpw2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.581991 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-client-ca" (OuterVolumeSpecName: "client-ca") pod "4eb536d7-2076-4a86-ba81-e1c746ab6cf6" (UID: "4eb536d7-2076-4a86-ba81-e1c746ab6cf6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.661034 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-client-ca\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.661548 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hgnv\" (UniqueName: \"kubernetes.io/projected/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-kube-api-access-7hgnv\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.661661 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-proxy-ca-bundles\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.661858 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-config\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.661981 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-serving-cert\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.662102 4888 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.662185 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpw2m\" (UniqueName: \"kubernetes.io/projected/8b65d758-78c2-4e61-8553-2298157b49a3-kube-api-access-fpw2m\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.662258 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.662351 4888 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b65d758-78c2-4e61-8553-2298157b49a3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.662438 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.662519 4888 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b65d758-78c2-4e61-8553-2298157b49a3-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.662601 4888 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.662676 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gprn\" (UniqueName: \"kubernetes.io/projected/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-kube-api-access-7gprn\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.662738 4888 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eb536d7-2076-4a86-ba81-e1c746ab6cf6-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.764001 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-client-ca\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.764103 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hgnv\" (UniqueName: \"kubernetes.io/projected/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-kube-api-access-7hgnv\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.764130 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-proxy-ca-bundles\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.764192 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-config\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.764220 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-serving-cert\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " 
pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.765142 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-client-ca\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.765751 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-proxy-ca-bundles\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.765903 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-config\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.767968 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-serving-cert\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.783309 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hgnv\" (UniqueName: \"kubernetes.io/projected/67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94-kube-api-access-7hgnv\") pod \"controller-manager-58c46f6d68-rfxsk\" (UID: \"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94\") " pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:09 crc kubenswrapper[4888]: I1006 15:13:09.858675 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.191512 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" event={"ID":"3a700faf-c900-4a48-814b-568b4eb5b60c","Type":"ContainerStarted","Data":"632cea00cd0ec7bfd3804569f645cba614ac456863fbd30138c4092c33c53bb7"} Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.192249 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.194115 4888 generic.go:334] "Generic (PLEG): container finished" podID="e8af0a6f-bf6e-4822-827f-6e40bf4c9f15" containerID="0e2e052ad78d2c28d9f690bb3b69542d6ee9f191006e3e9cd834494dc7e8fb98" exitCode=0 Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.194358 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m9bpz" event={"ID":"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15","Type":"ContainerDied","Data":"0e2e052ad78d2c28d9f690bb3b69542d6ee9f191006e3e9cd834494dc7e8fb98"} Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.195749 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" event={"ID":"8b65d758-78c2-4e61-8553-2298157b49a3","Type":"ContainerDied","Data":"cf7033e517b66243144d9fa958b78fdc8b916eed67643396e1f0906e6789f060"} Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.195781 4888 scope.go:117] "RemoveContainer" containerID="96013f6c01db5b3b35dd5a9f325cd177e6ff9c629b2901554b08cc7eb1210b00" Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.195946 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bphs2" Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.201124 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" event={"ID":"4eb536d7-2076-4a86-ba81-e1c746ab6cf6","Type":"ContainerDied","Data":"6e44df419757ee3867172e43f071653bc9e5c8dd3b2a7f2add399b03bc0c88bd"} Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.201230 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t" Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.222264 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc" podStartSLOduration=2.968085746 podStartE2EDuration="11.2222473s" podCreationTimestamp="2025-10-06 15:12:59 +0000 UTC" firstStartedPulling="2025-10-06 15:13:00.960749957 +0000 UTC m=+720.773100675" lastFinishedPulling="2025-10-06 15:13:09.214911511 +0000 UTC m=+729.027262229" observedRunningTime="2025-10-06 15:13:10.220586768 +0000 UTC m=+730.032937496" watchObservedRunningTime="2025-10-06 15:13:10.2222473 +0000 UTC m=+730.034598018" Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.233821 4888 scope.go:117] "RemoveContainer" containerID="042d7871103f4cc24d117f5fa2254a12ca9f367c1753eb2477b5cfa322cb073b" Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.281423 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bphs2"] Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.284522 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bphs2"] Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.295985 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t"] Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.308662 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzv4t"] Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.329379 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c46f6d68-rfxsk"] Oct 06 15:13:10 crc kubenswrapper[4888]: W1006 15:13:10.337514 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67b9d7a7_8969_4e16_a6f6_f9ecc72c8c94.slice/crio-b041360f8c89b06f7e077a4503029b28bb20dd47d20adf9844bdc5f55f355d03 WatchSource:0}: Error finding container b041360f8c89b06f7e077a4503029b28bb20dd47d20adf9844bdc5f55f355d03: Status 404 returned error can't find the container with id b041360f8c89b06f7e077a4503029b28bb20dd47d20adf9844bdc5f55f355d03 Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.929682 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eb536d7-2076-4a86-ba81-e1c746ab6cf6" path="/var/lib/kubelet/pods/4eb536d7-2076-4a86-ba81-e1c746ab6cf6/volumes" Oct 06 15:13:10 crc kubenswrapper[4888]: I1006 15:13:10.930487 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b65d758-78c2-4e61-8553-2298157b49a3" path="/var/lib/kubelet/pods/8b65d758-78c2-4e61-8553-2298157b49a3/volumes" Oct 06 15:13:11 crc kubenswrapper[4888]: I1006 15:13:11.208516 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" event={"ID":"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94","Type":"ContainerStarted","Data":"465e162c60ce5c882af81f6fa31db9f1b9c5ecf8b9466d7c1956b8a2d60b3dbf"} Oct 06 15:13:11 crc kubenswrapper[4888]: I1006 15:13:11.208556 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" 
event={"ID":"67b9d7a7-8969-4e16-a6f6-f9ecc72c8c94","Type":"ContainerStarted","Data":"b041360f8c89b06f7e077a4503029b28bb20dd47d20adf9844bdc5f55f355d03"} Oct 06 15:13:11 crc kubenswrapper[4888]: I1006 15:13:11.209084 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:11 crc kubenswrapper[4888]: I1006 15:13:11.213273 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" Oct 06 15:13:11 crc kubenswrapper[4888]: I1006 15:13:11.213734 4888 generic.go:334] "Generic (PLEG): container finished" podID="e8af0a6f-bf6e-4822-827f-6e40bf4c9f15" containerID="6c7569cfb8a3a93851833e4a0c0965c835de461210f28058f8634da8d903fa36" exitCode=0 Oct 06 15:13:11 crc kubenswrapper[4888]: I1006 15:13:11.213905 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m9bpz" event={"ID":"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15","Type":"ContainerDied","Data":"6c7569cfb8a3a93851833e4a0c0965c835de461210f28058f8634da8d903fa36"} Oct 06 15:13:11 crc kubenswrapper[4888]: I1006 15:13:11.236723 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58c46f6d68-rfxsk" podStartSLOduration=5.236699753 podStartE2EDuration="5.236699753s" podCreationTimestamp="2025-10-06 15:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:13:11.234734101 +0000 UTC m=+731.047084829" watchObservedRunningTime="2025-10-06 15:13:11.236699753 +0000 UTC m=+731.049050471" Oct 06 15:13:11 crc kubenswrapper[4888]: I1006 15:13:11.525644 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-xd85f" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.222129 4888 generic.go:334] "Generic (PLEG): container finished" podID="e8af0a6f-bf6e-4822-827f-6e40bf4c9f15" containerID="13bfd21f525048d7dc40649a08c8f4b330437c0959fcf30a0e42c0aed6878810" exitCode=0 Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.222202 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m9bpz" event={"ID":"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15","Type":"ContainerDied","Data":"13bfd21f525048d7dc40649a08c8f4b330437c0959fcf30a0e42c0aed6878810"} Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.371755 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg"] Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.372908 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.377188 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.377375 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.377544 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.377576 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.377639 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.386480 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.402142 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg"] Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.403625 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-config\") pod \"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.403724 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-serving-cert\") pod \"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.403773 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-client-ca\") pod \"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.403843 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgzpj\" (UniqueName: \"kubernetes.io/projected/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-kube-api-access-lgzpj\") pod \"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.504711 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-serving-cert\") pod 
\"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.504824 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-client-ca\") pod \"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.504857 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgzpj\" (UniqueName: \"kubernetes.io/projected/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-kube-api-access-lgzpj\") pod \"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.504917 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-config\") pod \"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.506262 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-client-ca\") pod \"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.507225 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-config\") pod \"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.519863 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-serving-cert\") pod \"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.527466 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgzpj\" (UniqueName: \"kubernetes.io/projected/2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2-kube-api-access-lgzpj\") pod \"route-controller-manager-7c49b44965-kr5dg\" (UID: \"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2\") " pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:12 crc kubenswrapper[4888]: I1006 15:13:12.690163 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.086767 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg"] Oct 06 15:13:13 crc kubenswrapper[4888]: W1006 15:13:13.095343 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d8f8aa8_d8b8_4442_9feb_9f268c8c98a2.slice/crio-de6dcbb309419dc80b9004893ba381ad8b292240eb748719b4d63535f14fa625 WatchSource:0}: Error finding container de6dcbb309419dc80b9004893ba381ad8b292240eb748719b4d63535f14fa625: Status 404 returned error can't find the container with id de6dcbb309419dc80b9004893ba381ad8b292240eb748719b4d63535f14fa625 Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.232618 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m9bpz" event={"ID":"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15","Type":"ContainerStarted","Data":"92810694523d8e1f41aa2e9c99a9ed9a5714275f6230f200cdd9b43993f0dfa2"} Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.232658 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m9bpz" event={"ID":"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15","Type":"ContainerStarted","Data":"2c203eb963974ea0973592595ba47371881009766d0806fd591959e8dd0405ae"} Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.232669 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m9bpz" event={"ID":"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15","Type":"ContainerStarted","Data":"dbae00370a937364a555f47d081c478908ab19cb34d9a9154a85eef39abe0a50"} Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.232677 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m9bpz" event={"ID":"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15","Type":"ContainerStarted","Data":"71a4d9e2a50566469cff45b42d0b04f353e623087ecbeaf4d25752b5314211cb"} Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.232686 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m9bpz" event={"ID":"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15","Type":"ContainerStarted","Data":"640c7ba5fff6bb449ee680d91768fd4b63bdd8142f62fd4c9fb13bf67b4ad1bc"} Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.236393 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" event={"ID":"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2","Type":"ContainerStarted","Data":"7b0dbda9144fb9450456c12ac3e79d599b8e165e74780526a0d3b52e7c212053"} Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.236433 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" event={"ID":"2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2","Type":"ContainerStarted","Data":"de6dcbb309419dc80b9004893ba381ad8b292240eb748719b4d63535f14fa625"} Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.236587 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.237910 4888 patch_prober.go:28] interesting pod/route-controller-manager-7c49b44965-kr5dg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" start-of-body= Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.237949 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" podUID="2d8f8aa8-d8b8-4442-9feb-9f268c8c98a2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" Oct 06 15:13:13 crc kubenswrapper[4888]: I1006 15:13:13.259386 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" podStartSLOduration=5.259366723 podStartE2EDuration="5.259366723s" podCreationTimestamp="2025-10-06 15:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:13:13.256306387 +0000 UTC m=+733.068657105" watchObservedRunningTime="2025-10-06 15:13:13.259366723 +0000 UTC m=+733.071717441" Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.247652 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m9bpz" event={"ID":"e8af0a6f-bf6e-4822-827f-6e40bf4c9f15","Type":"ContainerStarted","Data":"4e3536f34499c4f72383347d47345260198573b75f0cfb07d76813eab29d18f4"} Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.257442 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c49b44965-kr5dg" Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.274354 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-m9bpz" podStartSLOduration=6.7706225490000005 podStartE2EDuration="15.274334562s" podCreationTimestamp="2025-10-06 15:12:59 +0000 UTC" firstStartedPulling="2025-10-06 15:13:00.653125614 +0000 UTC m=+720.465476342" lastFinishedPulling="2025-10-06 15:13:09.156837637 +0000 UTC m=+728.969188355" observedRunningTime="2025-10-06 15:13:14.270075018 +0000 UTC m=+734.082425746" watchObservedRunningTime="2025-10-06 15:13:14.274334562 +0000 UTC m=+734.086685290" Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.565987 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-h8ccp"] Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.566879 4888 util.go:30] "No sandbox for pod can be found. 
Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.566879 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-h8ccp"
Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.568683 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.568922 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.621099 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-h8ccp"]
Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.635468 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvb9p\" (UniqueName: \"kubernetes.io/projected/78c4b58c-5148-478d-8a01-398469842798-kube-api-access-mvb9p\") pod \"openstack-operator-index-h8ccp\" (UID: \"78c4b58c-5148-478d-8a01-398469842798\") " pod="openstack-operators/openstack-operator-index-h8ccp"
Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.737140 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvb9p\" (UniqueName: \"kubernetes.io/projected/78c4b58c-5148-478d-8a01-398469842798-kube-api-access-mvb9p\") pod \"openstack-operator-index-h8ccp\" (UID: \"78c4b58c-5148-478d-8a01-398469842798\") " pod="openstack-operators/openstack-operator-index-h8ccp"
Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.755969 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvb9p\" (UniqueName: \"kubernetes.io/projected/78c4b58c-5148-478d-8a01-398469842798-kube-api-access-mvb9p\") pod \"openstack-operator-index-h8ccp\" (UID: \"78c4b58c-5148-478d-8a01-398469842798\") " pod="openstack-operators/openstack-operator-index-h8ccp"
Oct 06 15:13:14 crc kubenswrapper[4888]: I1006 15:13:14.883162 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-h8ccp" Oct 06 15:13:15 crc kubenswrapper[4888]: I1006 15:13:15.254248 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:13:15 crc kubenswrapper[4888]: I1006 15:13:15.321081 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-h8ccp"] Oct 06 15:13:15 crc kubenswrapper[4888]: I1006 15:13:15.517348 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:13:15 crc kubenswrapper[4888]: I1006 15:13:15.565399 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-m9bpz" Oct 06 15:13:16 crc kubenswrapper[4888]: I1006 15:13:16.258934 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h8ccp" event={"ID":"78c4b58c-5148-478d-8a01-398469842798","Type":"ContainerStarted","Data":"12366558bdf390cf36d8d238a45fb982f220890b701d5794ab5d7875e975a1b5"} Oct 06 15:13:16 crc kubenswrapper[4888]: I1006 15:13:16.517923 4888 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 06 15:13:17 crc kubenswrapper[4888]: I1006 15:13:17.945275 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-h8ccp"] Oct 06 15:13:18 crc kubenswrapper[4888]: I1006 15:13:18.547901 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-s88cv"] Oct 06 15:13:18 crc kubenswrapper[4888]: I1006 15:13:18.549279 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-s88cv" Oct 06 15:13:18 crc kubenswrapper[4888]: I1006 15:13:18.554126 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-szh8j" Oct 06 15:13:18 crc kubenswrapper[4888]: I1006 15:13:18.559435 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s88cv"] Oct 06 15:13:18 crc kubenswrapper[4888]: I1006 15:13:18.600058 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl8dk\" (UniqueName: \"kubernetes.io/projected/2ec65a14-9b20-4a08-853e-9beb385d3883-kube-api-access-nl8dk\") pod \"openstack-operator-index-s88cv\" (UID: \"2ec65a14-9b20-4a08-853e-9beb385d3883\") " pod="openstack-operators/openstack-operator-index-s88cv" Oct 06 15:13:18 crc kubenswrapper[4888]: I1006 15:13:18.700953 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl8dk\" (UniqueName: \"kubernetes.io/projected/2ec65a14-9b20-4a08-853e-9beb385d3883-kube-api-access-nl8dk\") pod \"openstack-operator-index-s88cv\" (UID: \"2ec65a14-9b20-4a08-853e-9beb385d3883\") " pod="openstack-operators/openstack-operator-index-s88cv" Oct 06 15:13:18 crc kubenswrapper[4888]: I1006 15:13:18.721763 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl8dk\" (UniqueName: \"kubernetes.io/projected/2ec65a14-9b20-4a08-853e-9beb385d3883-kube-api-access-nl8dk\") pod \"openstack-operator-index-s88cv\" (UID: \"2ec65a14-9b20-4a08-853e-9beb385d3883\") " pod="openstack-operators/openstack-operator-index-s88cv" Oct 06 15:13:18 crc kubenswrapper[4888]: I1006 15:13:18.896054 4888 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-s88cv" Oct 06 15:13:19 crc kubenswrapper[4888]: I1006 15:13:19.275658 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h8ccp" event={"ID":"78c4b58c-5148-478d-8a01-398469842798","Type":"ContainerStarted","Data":"5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22"} Oct 06 15:13:19 crc kubenswrapper[4888]: I1006 15:13:19.275922 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-h8ccp" podUID="78c4b58c-5148-478d-8a01-398469842798" containerName="registry-server" containerID="cri-o://5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22" gracePeriod=2 Oct 06 15:13:19 crc kubenswrapper[4888]: I1006 15:13:19.313724 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-h8ccp" podStartSLOduration=2.023005627 podStartE2EDuration="5.313694624s" podCreationTimestamp="2025-10-06 15:13:14 +0000 UTC" firstStartedPulling="2025-10-06 15:13:15.331707273 +0000 UTC m=+735.144057991" lastFinishedPulling="2025-10-06 15:13:18.62239627 +0000 UTC m=+738.434746988" observedRunningTime="2025-10-06 15:13:19.299759855 +0000 UTC m=+739.112110593" watchObservedRunningTime="2025-10-06 15:13:19.313694624 +0000 UTC m=+739.126045352" Oct 06 15:13:19 crc kubenswrapper[4888]: I1006 15:13:19.314147 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s88cv"] Oct 06 15:13:19 crc kubenswrapper[4888]: I1006 15:13:19.747275 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-h8ccp" Oct 06 15:13:19 crc kubenswrapper[4888]: I1006 15:13:19.820509 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvb9p\" (UniqueName: \"kubernetes.io/projected/78c4b58c-5148-478d-8a01-398469842798-kube-api-access-mvb9p\") pod \"78c4b58c-5148-478d-8a01-398469842798\" (UID: \"78c4b58c-5148-478d-8a01-398469842798\") " Oct 06 15:13:19 crc kubenswrapper[4888]: I1006 15:13:19.826742 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c4b58c-5148-478d-8a01-398469842798-kube-api-access-mvb9p" (OuterVolumeSpecName: "kube-api-access-mvb9p") pod "78c4b58c-5148-478d-8a01-398469842798" (UID: "78c4b58c-5148-478d-8a01-398469842798"). InnerVolumeSpecName "kube-api-access-mvb9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:13:19 crc kubenswrapper[4888]: I1006 15:13:19.922503 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvb9p\" (UniqueName: \"kubernetes.io/projected/78c4b58c-5148-478d-8a01-398469842798-kube-api-access-mvb9p\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.284035 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s88cv" event={"ID":"2ec65a14-9b20-4a08-853e-9beb385d3883","Type":"ContainerStarted","Data":"72732d119943f19fbb921ecff541d9bdc12d8e1112f4a24289118882990e8bb5"} Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.284074 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s88cv" event={"ID":"2ec65a14-9b20-4a08-853e-9beb385d3883","Type":"ContainerStarted","Data":"3d877e724735b4db9410d490bf0efe6b8e40f408bdd5844caf54f36d996b9ea0"} Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.286902 4888 generic.go:334] "Generic (PLEG): container finished" podID="78c4b58c-5148-478d-8a01-398469842798" containerID="5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22" exitCode=0 Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.286954 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-h8ccp" Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.286956 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h8ccp" event={"ID":"78c4b58c-5148-478d-8a01-398469842798","Type":"ContainerDied","Data":"5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22"} Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.287124 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-h8ccp" event={"ID":"78c4b58c-5148-478d-8a01-398469842798","Type":"ContainerDied","Data":"12366558bdf390cf36d8d238a45fb982f220890b701d5794ab5d7875e975a1b5"} Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.287165 4888 scope.go:117] "RemoveContainer" containerID="5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22" Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.307597 4888 scope.go:117] "RemoveContainer" containerID="5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22" Oct 06 15:13:20 crc kubenswrapper[4888]: E1006 15:13:20.308116 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22\": container with ID starting with 5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22 not found: ID does not exist" containerID="5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22" Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.308164 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22"} err="failed to get container status \"5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22\": rpc error: code = NotFound desc = could not find container \"5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22\": container with ID starting with 5ac4b8dd010d07b0819716aef7d5de7e2cdf6828f7f110edaae106ae94dcbb22 not found: ID does not exist" Oct 06 15:13:20 crc kubenswrapper[4888]: 
I1006 15:13:20.325149 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-s88cv" podStartSLOduration=2.2725333 podStartE2EDuration="2.325133921s" podCreationTimestamp="2025-10-06 15:13:18 +0000 UTC" firstStartedPulling="2025-10-06 15:13:19.363701204 +0000 UTC m=+739.176051922" lastFinishedPulling="2025-10-06 15:13:19.416301825 +0000 UTC m=+739.228652543" observedRunningTime="2025-10-06 15:13:20.313217947 +0000 UTC m=+740.125568675" watchObservedRunningTime="2025-10-06 15:13:20.325133921 +0000 UTC m=+740.137484639"
Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.327507 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-h8ccp"]
Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.331515 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-h8ccp"]
Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.507085 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-t22dc"
Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.641186 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-z57b5"
Oct 06 15:13:20 crc kubenswrapper[4888]: I1006 15:13:20.928956 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c4b58c-5148-478d-8a01-398469842798" path="/var/lib/kubelet/pods/78c4b58c-5148-478d-8a01-398469842798/volumes"
Oct 06 15:13:28 crc kubenswrapper[4888]: I1006 15:13:28.896750 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-s88cv"
Oct 06 15:13:28 crc kubenswrapper[4888]: I1006 15:13:28.897529 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-s88cv"
Oct 06 15:13:28 crc kubenswrapper[4888]: I1006 15:13:28.933786 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-s88cv"
Oct 06 15:13:29 crc kubenswrapper[4888]: I1006 15:13:29.364056 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-s88cv"
Oct 06 15:13:30 crc kubenswrapper[4888]: I1006 15:13:30.520419 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-m9bpz"
Oct 06 15:13:32 crc kubenswrapper[4888]: I1006 15:13:32.564367 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 15:13:32 crc kubenswrapper[4888]: I1006 15:13:32.564684 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 15:13:32 crc kubenswrapper[4888]: I1006 15:13:32.564735 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk"
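
The three entries above show a liveness probe failing with connection refused, and the entries that follow show the consequence: the kubelet marks the container unhealthy, kills it with its termination grace period (gracePeriod=600 here), and starts a replacement. At its core the HTTP check is just a GET with a short timeout, repeated every period and only acted on after a run of consecutive failures reaches the probe's failure threshold. A rough Python sketch of that loop (the endpoint comes from the log; the period, timeout, and threshold are assumed values, since the pod's actual probe spec is not part of this log):

    import time
    import urllib.request
    from urllib.error import URLError

    URL = "http://127.0.0.1:8798/health"                 # endpoint taken from the failure message above
    PERIOD_S, TIMEOUT_S, FAILURE_THRESHOLD = 10, 1, 3    # assumed values for illustration

    def healthy(url, timeout):
        # A liveness check reduced to its essentials: GET the endpoint, treat 2xx/3xx as success.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except (URLError, OSError):
            return False  # covers "connect: connection refused", as seen in the log

    failures = 0
    while failures < FAILURE_THRESHOLD:
        failures = 0 if healthy(URL, TIMEOUT_S) else failures + 1
        time.sleep(PERIOD_S)
    print("liveness failed; the kubelet would now kill the container (honoring its grace period) and restart it")
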
Oct 06 15:13:32 crc kubenswrapper[4888]: I1006 15:13:32.565384 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a27765b71e89e6df1e1c89446e393b644a2f95a6e1272b73bf5478141df6f61"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 06 15:13:32 crc kubenswrapper[4888]: I1006 15:13:32.565445 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://2a27765b71e89e6df1e1c89446e393b644a2f95a6e1272b73bf5478141df6f61" gracePeriod=600
Oct 06 15:13:33 crc kubenswrapper[4888]: I1006 15:13:33.363536 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="2a27765b71e89e6df1e1c89446e393b644a2f95a6e1272b73bf5478141df6f61" exitCode=0
Oct 06 15:13:33 crc kubenswrapper[4888]: I1006 15:13:33.363619 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"2a27765b71e89e6df1e1c89446e393b644a2f95a6e1272b73bf5478141df6f61"}
Oct 06 15:13:33 crc kubenswrapper[4888]: I1006 15:13:33.363877 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"c7f872a375e0d5fa3a0376b8ecf93b05be1a27ff35604df3b986a455e732259f"}
Oct 06 15:13:33 crc kubenswrapper[4888]: I1006 15:13:33.363893 4888 scope.go:117] "RemoveContainer" containerID="e98908deb283e6a036eb37ab0790f5913cfd911db2848acf3a6ebbd35a13b160"
Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.771074 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7"]
Oct 06 15:13:36 crc kubenswrapper[4888]: E1006 15:13:36.772608 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c4b58c-5148-478d-8a01-398469842798" containerName="registry-server"
Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.772683 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c4b58c-5148-478d-8a01-398469842798" containerName="registry-server"
Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.772883 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c4b58c-5148-478d-8a01-398469842798" containerName="registry-server"
Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.773727 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.776763 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-p8bxl" Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.785404 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7"] Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.837769 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-bundle\") pod \"0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.837834 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-util\") pod \"0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.837858 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtqwf\" (UniqueName: \"kubernetes.io/projected/f5eae805-9f70-47d7-b612-6ee46380d5a4-kube-api-access-gtqwf\") pod \"0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.939674 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-bundle\") pod \"0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.939983 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-util\") pod \"0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.940124 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtqwf\" (UniqueName: \"kubernetes.io/projected/f5eae805-9f70-47d7-b612-6ee46380d5a4-kube-api-access-gtqwf\") pod \"0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.940437 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-bundle\") pod \"0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.940489 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-util\") pod \"0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:36 crc kubenswrapper[4888]: I1006 15:13:36.962420 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtqwf\" (UniqueName: \"kubernetes.io/projected/f5eae805-9f70-47d7-b612-6ee46380d5a4-kube-api-access-gtqwf\") pod \"0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:37 crc kubenswrapper[4888]: I1006 15:13:37.092842 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:37 crc kubenswrapper[4888]: I1006 15:13:37.479430 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7"] Oct 06 15:13:37 crc kubenswrapper[4888]: W1006 15:13:37.498776 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5eae805_9f70_47d7_b612_6ee46380d5a4.slice/crio-995d934827507dd887ab0807f85b4e05481752c30b783c6a40df13e1d5ca33a7 WatchSource:0}: Error finding container 995d934827507dd887ab0807f85b4e05481752c30b783c6a40df13e1d5ca33a7: Status 404 returned error can't find the container with id 995d934827507dd887ab0807f85b4e05481752c30b783c6a40df13e1d5ca33a7 Oct 06 15:13:38 crc kubenswrapper[4888]: I1006 15:13:38.397958 4888 generic.go:334] "Generic (PLEG): container finished" podID="f5eae805-9f70-47d7-b612-6ee46380d5a4" containerID="30ab75a39421e41bbe1e174b760db81cbf86190fb69b6ed41200a6f13dc53ca8" exitCode=0 Oct 06 15:13:38 crc kubenswrapper[4888]: I1006 15:13:38.398158 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" event={"ID":"f5eae805-9f70-47d7-b612-6ee46380d5a4","Type":"ContainerDied","Data":"30ab75a39421e41bbe1e174b760db81cbf86190fb69b6ed41200a6f13dc53ca8"} Oct 06 15:13:38 crc kubenswrapper[4888]: I1006 15:13:38.398184 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" event={"ID":"f5eae805-9f70-47d7-b612-6ee46380d5a4","Type":"ContainerStarted","Data":"995d934827507dd887ab0807f85b4e05481752c30b783c6a40df13e1d5ca33a7"} Oct 06 15:13:39 crc kubenswrapper[4888]: I1006 15:13:39.406038 4888 generic.go:334] "Generic (PLEG): container finished" podID="f5eae805-9f70-47d7-b612-6ee46380d5a4" containerID="fe2874bc9ce3022c7a888be876112078783dca5627b530679958ee293b1073ff" exitCode=0 Oct 06 15:13:39 crc kubenswrapper[4888]: I1006 15:13:39.406137 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" event={"ID":"f5eae805-9f70-47d7-b612-6ee46380d5a4","Type":"ContainerDied","Data":"fe2874bc9ce3022c7a888be876112078783dca5627b530679958ee293b1073ff"} Oct 06 15:13:40 crc kubenswrapper[4888]: I1006 15:13:40.414439 4888 generic.go:334] "Generic (PLEG): container finished" podID="f5eae805-9f70-47d7-b612-6ee46380d5a4" containerID="ed07fcd9514b63706c5827186eb684a4e831e9118849355652097d54c246169a" exitCode=0 Oct 06 15:13:40 crc kubenswrapper[4888]: I1006 15:13:40.414492 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" event={"ID":"f5eae805-9f70-47d7-b612-6ee46380d5a4","Type":"ContainerDied","Data":"ed07fcd9514b63706c5827186eb684a4e831e9118849355652097d54c246169a"} Oct 06 15:13:41 crc kubenswrapper[4888]: I1006 15:13:41.702581 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:41 crc kubenswrapper[4888]: I1006 15:13:41.713545 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-util\") pod \"f5eae805-9f70-47d7-b612-6ee46380d5a4\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " Oct 06 15:13:41 crc kubenswrapper[4888]: I1006 15:13:41.713600 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtqwf\" (UniqueName: \"kubernetes.io/projected/f5eae805-9f70-47d7-b612-6ee46380d5a4-kube-api-access-gtqwf\") pod \"f5eae805-9f70-47d7-b612-6ee46380d5a4\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " Oct 06 15:13:41 crc kubenswrapper[4888]: I1006 15:13:41.713670 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-bundle\") pod \"f5eae805-9f70-47d7-b612-6ee46380d5a4\" (UID: \"f5eae805-9f70-47d7-b612-6ee46380d5a4\") " Oct 06 15:13:41 crc kubenswrapper[4888]: I1006 15:13:41.714507 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-bundle" (OuterVolumeSpecName: "bundle") pod "f5eae805-9f70-47d7-b612-6ee46380d5a4" (UID: "f5eae805-9f70-47d7-b612-6ee46380d5a4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:13:41 crc kubenswrapper[4888]: I1006 15:13:41.722090 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5eae805-9f70-47d7-b612-6ee46380d5a4-kube-api-access-gtqwf" (OuterVolumeSpecName: "kube-api-access-gtqwf") pod "f5eae805-9f70-47d7-b612-6ee46380d5a4" (UID: "f5eae805-9f70-47d7-b612-6ee46380d5a4"). InnerVolumeSpecName "kube-api-access-gtqwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:13:41 crc kubenswrapper[4888]: I1006 15:13:41.732001 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-util" (OuterVolumeSpecName: "util") pod "f5eae805-9f70-47d7-b612-6ee46380d5a4" (UID: "f5eae805-9f70-47d7-b612-6ee46380d5a4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:13:41 crc kubenswrapper[4888]: I1006 15:13:41.816179 4888 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-util\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:41 crc kubenswrapper[4888]: I1006 15:13:41.816228 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtqwf\" (UniqueName: \"kubernetes.io/projected/f5eae805-9f70-47d7-b612-6ee46380d5a4-kube-api-access-gtqwf\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:41 crc kubenswrapper[4888]: I1006 15:13:41.816271 4888 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5eae805-9f70-47d7-b612-6ee46380d5a4-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:13:42 crc kubenswrapper[4888]: I1006 15:13:42.426509 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" event={"ID":"f5eae805-9f70-47d7-b612-6ee46380d5a4","Type":"ContainerDied","Data":"995d934827507dd887ab0807f85b4e05481752c30b783c6a40df13e1d5ca33a7"} Oct 06 15:13:42 crc kubenswrapper[4888]: I1006 15:13:42.426550 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="995d934827507dd887ab0807f85b4e05481752c30b783c6a40df13e1d5ca33a7" Oct 06 15:13:42 crc kubenswrapper[4888]: I1006 15:13:42.426582 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7" Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.527647 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r"] Oct 06 15:13:49 crc kubenswrapper[4888]: E1006 15:13:49.528444 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5eae805-9f70-47d7-b612-6ee46380d5a4" containerName="util" Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.528460 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5eae805-9f70-47d7-b612-6ee46380d5a4" containerName="util" Oct 06 15:13:49 crc kubenswrapper[4888]: E1006 15:13:49.528490 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5eae805-9f70-47d7-b612-6ee46380d5a4" containerName="pull" Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.528497 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5eae805-9f70-47d7-b612-6ee46380d5a4" containerName="pull" Oct 06 15:13:49 crc kubenswrapper[4888]: E1006 15:13:49.528509 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5eae805-9f70-47d7-b612-6ee46380d5a4" containerName="extract" Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.528516 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5eae805-9f70-47d7-b612-6ee46380d5a4" containerName="extract" Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.528652 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5eae805-9f70-47d7-b612-6ee46380d5a4" containerName="extract" Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.529363 4888 util.go:30] "No sandbox for pod can be found. 
Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.529363 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r"
Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.531157 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-c5mvn"
Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.567073 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r"]
Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.724551 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrkh\" (UniqueName: \"kubernetes.io/projected/cc3cce66-3f1e-4348-8927-9a809f383102-kube-api-access-jqrkh\") pod \"openstack-operator-controller-operator-6bbd86684c-xs78r\" (UID: \"cc3cce66-3f1e-4348-8927-9a809f383102\") " pod="openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r"
Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.826108 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqrkh\" (UniqueName: \"kubernetes.io/projected/cc3cce66-3f1e-4348-8927-9a809f383102-kube-api-access-jqrkh\") pod \"openstack-operator-controller-operator-6bbd86684c-xs78r\" (UID: \"cc3cce66-3f1e-4348-8927-9a809f383102\") " pod="openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r"
Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.846582 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqrkh\" (UniqueName: \"kubernetes.io/projected/cc3cce66-3f1e-4348-8927-9a809f383102-kube-api-access-jqrkh\") pod \"openstack-operator-controller-operator-6bbd86684c-xs78r\" (UID: \"cc3cce66-3f1e-4348-8927-9a809f383102\") " pod="openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r"
Oct 06 15:13:49 crc kubenswrapper[4888]: I1006 15:13:49.850210 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r"
Oct 06 15:13:50 crc kubenswrapper[4888]: I1006 15:13:50.290052 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r"]
Oct 06 15:13:50 crc kubenswrapper[4888]: I1006 15:13:50.483617 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r" event={"ID":"cc3cce66-3f1e-4348-8927-9a809f383102","Type":"ContainerStarted","Data":"ae882336ad1bf7c8193de9b3c681b488a7e04c5dbf3c27dc6896b68960a0e3dc"}
Oct 06 15:13:50 crc kubenswrapper[4888]: I1006 15:13:50.835032 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z8lkz"]
Oct 06 15:13:50 crc kubenswrapper[4888]: I1006 15:13:50.836340 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:13:50 crc kubenswrapper[4888]: I1006 15:13:50.863515 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8lkz"] Oct 06 15:13:50 crc kubenswrapper[4888]: I1006 15:13:50.939302 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-catalog-content\") pod \"certified-operators-z8lkz\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:13:50 crc kubenswrapper[4888]: I1006 15:13:50.939403 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-utilities\") pod \"certified-operators-z8lkz\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:13:50 crc kubenswrapper[4888]: I1006 15:13:50.939432 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsm9k\" (UniqueName: \"kubernetes.io/projected/108db9f0-4278-4daa-a41b-3f501131871f-kube-api-access-lsm9k\") pod \"certified-operators-z8lkz\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:13:51 crc kubenswrapper[4888]: I1006 15:13:51.040694 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-catalog-content\") pod \"certified-operators-z8lkz\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:13:51 crc kubenswrapper[4888]: I1006 15:13:51.040826 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-utilities\") pod \"certified-operators-z8lkz\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:13:51 crc kubenswrapper[4888]: I1006 15:13:51.041223 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsm9k\" (UniqueName: \"kubernetes.io/projected/108db9f0-4278-4daa-a41b-3f501131871f-kube-api-access-lsm9k\") pod \"certified-operators-z8lkz\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:13:51 crc kubenswrapper[4888]: I1006 15:13:51.041281 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-utilities\") pod \"certified-operators-z8lkz\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:13:51 crc kubenswrapper[4888]: I1006 15:13:51.041223 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-catalog-content\") pod \"certified-operators-z8lkz\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:13:51 crc kubenswrapper[4888]: I1006 15:13:51.078356 4888 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lsm9k\" (UniqueName: \"kubernetes.io/projected/108db9f0-4278-4daa-a41b-3f501131871f-kube-api-access-lsm9k\") pod \"certified-operators-z8lkz\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:13:51 crc kubenswrapper[4888]: I1006 15:13:51.175480 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:13:51 crc kubenswrapper[4888]: I1006 15:13:51.752897 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8lkz"] Oct 06 15:13:52 crc kubenswrapper[4888]: I1006 15:13:52.503052 4888 generic.go:334] "Generic (PLEG): container finished" podID="108db9f0-4278-4daa-a41b-3f501131871f" containerID="9be9191c304c92e87dc4b5864855e2b5304ea359198cd00689ed1bef5bbc2ca9" exitCode=0 Oct 06 15:13:52 crc kubenswrapper[4888]: I1006 15:13:52.503336 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8lkz" event={"ID":"108db9f0-4278-4daa-a41b-3f501131871f","Type":"ContainerDied","Data":"9be9191c304c92e87dc4b5864855e2b5304ea359198cd00689ed1bef5bbc2ca9"} Oct 06 15:13:52 crc kubenswrapper[4888]: I1006 15:13:52.503363 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8lkz" event={"ID":"108db9f0-4278-4daa-a41b-3f501131871f","Type":"ContainerStarted","Data":"8e882ae1c44125668bde34c02545abff988bdfd703bd0d406f4445fd59bc3ec0"} Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.030632 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-99wb4"] Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.032242 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.048403 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99wb4"] Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.127369 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-catalog-content\") pod \"redhat-operators-99wb4\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.127451 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-utilities\") pod \"redhat-operators-99wb4\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.127494 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvx89\" (UniqueName: \"kubernetes.io/projected/89fa0c5f-3bde-440e-bfc4-7532562662e5-kube-api-access-qvx89\") pod \"redhat-operators-99wb4\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.228997 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-catalog-content\") pod \"redhat-operators-99wb4\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.229064 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-utilities\") pod \"redhat-operators-99wb4\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.229103 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvx89\" (UniqueName: \"kubernetes.io/projected/89fa0c5f-3bde-440e-bfc4-7532562662e5-kube-api-access-qvx89\") pod \"redhat-operators-99wb4\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.229726 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-utilities\") pod \"redhat-operators-99wb4\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.229750 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-catalog-content\") pod \"redhat-operators-99wb4\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.252429 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qvx89\" (UniqueName: \"kubernetes.io/projected/89fa0c5f-3bde-440e-bfc4-7532562662e5-kube-api-access-qvx89\") pod \"redhat-operators-99wb4\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.351899 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.541251 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r" event={"ID":"cc3cce66-3f1e-4348-8927-9a809f383102","Type":"ContainerStarted","Data":"ebbb313c74d49c60d9bbb1cd426b529a0206988684c6bd83f7e854fb319f6977"} Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.547313 4888 generic.go:334] "Generic (PLEG): container finished" podID="108db9f0-4278-4daa-a41b-3f501131871f" containerID="d79cd055c4db0fd56845d8e7c081922c99d28aad11c8f042a9bcca4ba8907d09" exitCode=0 Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.547368 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8lkz" event={"ID":"108db9f0-4278-4daa-a41b-3f501131871f","Type":"ContainerDied","Data":"d79cd055c4db0fd56845d8e7c081922c99d28aad11c8f042a9bcca4ba8907d09"} Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.822062 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99wb4"] Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.834238 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gsx62"] Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.835535 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.853387 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsx62"] Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.940960 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-utilities\") pod \"redhat-marketplace-gsx62\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.941009 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgmfw\" (UniqueName: \"kubernetes.io/projected/f28c1d05-4c12-435b-9b8b-c4c0ca621630-kube-api-access-vgmfw\") pod \"redhat-marketplace-gsx62\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:13:56 crc kubenswrapper[4888]: I1006 15:13:56.941044 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-catalog-content\") pod \"redhat-marketplace-gsx62\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:13:57 crc kubenswrapper[4888]: I1006 15:13:57.041776 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-utilities\") pod \"redhat-marketplace-gsx62\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:13:57 crc kubenswrapper[4888]: I1006 15:13:57.041854 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgmfw\" (UniqueName: \"kubernetes.io/projected/f28c1d05-4c12-435b-9b8b-c4c0ca621630-kube-api-access-vgmfw\") pod \"redhat-marketplace-gsx62\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:13:57 crc kubenswrapper[4888]: I1006 15:13:57.041902 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-catalog-content\") pod \"redhat-marketplace-gsx62\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:13:57 crc kubenswrapper[4888]: I1006 15:13:57.042349 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-utilities\") pod \"redhat-marketplace-gsx62\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:13:57 crc kubenswrapper[4888]: I1006 15:13:57.042895 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-catalog-content\") pod \"redhat-marketplace-gsx62\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:13:57 crc kubenswrapper[4888]: I1006 15:13:57.069942 4888 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vgmfw\" (UniqueName: \"kubernetes.io/projected/f28c1d05-4c12-435b-9b8b-c4c0ca621630-kube-api-access-vgmfw\") pod \"redhat-marketplace-gsx62\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:13:57 crc kubenswrapper[4888]: I1006 15:13:57.168062 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:13:57 crc kubenswrapper[4888]: I1006 15:13:57.563212 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99wb4" event={"ID":"89fa0c5f-3bde-440e-bfc4-7532562662e5","Type":"ContainerStarted","Data":"1d8179c7765018d46e8c4c6bd210fbd8eae7c7235c632d58c1cf8116342782f5"} Oct 06 15:13:58 crc kubenswrapper[4888]: I1006 15:13:58.353341 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsx62"] Oct 06 15:13:59 crc kubenswrapper[4888]: I1006 15:13:59.576602 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r" event={"ID":"cc3cce66-3f1e-4348-8927-9a809f383102","Type":"ContainerStarted","Data":"3ebfc45695b0774e9d8d16705995a666e88f402415630296d356136093b846e2"} Oct 06 15:13:59 crc kubenswrapper[4888]: I1006 15:13:59.577119 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r" Oct 06 15:13:59 crc kubenswrapper[4888]: I1006 15:13:59.578212 4888 generic.go:334] "Generic (PLEG): container finished" podID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" containerID="b4dc503c0016b9e4bec8c2ab7b36bb51ad480a4d6b15d037490b6842eafd98a6" exitCode=0 Oct 06 15:13:59 crc kubenswrapper[4888]: I1006 15:13:59.578255 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsx62" event={"ID":"f28c1d05-4c12-435b-9b8b-c4c0ca621630","Type":"ContainerDied","Data":"b4dc503c0016b9e4bec8c2ab7b36bb51ad480a4d6b15d037490b6842eafd98a6"} Oct 06 15:13:59 crc kubenswrapper[4888]: I1006 15:13:59.578273 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsx62" event={"ID":"f28c1d05-4c12-435b-9b8b-c4c0ca621630","Type":"ContainerStarted","Data":"1189f3039ab9ccb1de619c856d08e7cd51458f405196781260e225f4d1696832"} Oct 06 15:13:59 crc kubenswrapper[4888]: I1006 15:13:59.580976 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8lkz" event={"ID":"108db9f0-4278-4daa-a41b-3f501131871f","Type":"ContainerStarted","Data":"c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb"} Oct 06 15:13:59 crc kubenswrapper[4888]: I1006 15:13:59.585429 4888 generic.go:334] "Generic (PLEG): container finished" podID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerID="93c3f639976ea063d14e847c0997bcaae33fbdbf01c94f787ce677d94a6024f2" exitCode=0 Oct 06 15:13:59 crc kubenswrapper[4888]: I1006 15:13:59.585484 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99wb4" event={"ID":"89fa0c5f-3bde-440e-bfc4-7532562662e5","Type":"ContainerDied","Data":"93c3f639976ea063d14e847c0997bcaae33fbdbf01c94f787ce677d94a6024f2"} Oct 06 15:13:59 crc kubenswrapper[4888]: I1006 15:13:59.620436 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r" 
podStartSLOduration=2.305016409 podStartE2EDuration="10.620412746s" podCreationTimestamp="2025-10-06 15:13:49 +0000 UTC" firstStartedPulling="2025-10-06 15:13:50.31110487 +0000 UTC m=+770.123455588" lastFinishedPulling="2025-10-06 15:13:58.626501207 +0000 UTC m=+778.438851925" observedRunningTime="2025-10-06 15:13:59.615946486 +0000 UTC m=+779.428297224" watchObservedRunningTime="2025-10-06 15:13:59.620412746 +0000 UTC m=+779.432763474" Oct 06 15:13:59 crc kubenswrapper[4888]: I1006 15:13:59.678838 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z8lkz" podStartSLOduration=4.643430653 podStartE2EDuration="9.67881959s" podCreationTimestamp="2025-10-06 15:13:50 +0000 UTC" firstStartedPulling="2025-10-06 15:13:53.536534507 +0000 UTC m=+773.348885225" lastFinishedPulling="2025-10-06 15:13:58.571923444 +0000 UTC m=+778.384274162" observedRunningTime="2025-10-06 15:13:59.676078334 +0000 UTC m=+779.488429052" watchObservedRunningTime="2025-10-06 15:13:59.67881959 +0000 UTC m=+779.491170308" Oct 06 15:14:00 crc kubenswrapper[4888]: I1006 15:14:00.593528 4888 generic.go:334] "Generic (PLEG): container finished" podID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" containerID="622361a102cd6e954723fc34426a73cdc684a044f871895b4d72a55b5b613832" exitCode=0 Oct 06 15:14:00 crc kubenswrapper[4888]: I1006 15:14:00.593609 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsx62" event={"ID":"f28c1d05-4c12-435b-9b8b-c4c0ca621630","Type":"ContainerDied","Data":"622361a102cd6e954723fc34426a73cdc684a044f871895b4d72a55b5b613832"} Oct 06 15:14:00 crc kubenswrapper[4888]: I1006 15:14:00.597808 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6bbd86684c-xs78r" Oct 06 15:14:01 crc kubenswrapper[4888]: I1006 15:14:01.177423 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:14:01 crc kubenswrapper[4888]: I1006 15:14:01.177482 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:14:01 crc kubenswrapper[4888]: I1006 15:14:01.301270 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:14:01 crc kubenswrapper[4888]: I1006 15:14:01.601520 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsx62" event={"ID":"f28c1d05-4c12-435b-9b8b-c4c0ca621630","Type":"ContainerStarted","Data":"d9c018048e674b7207754a2240f3b038a6e6b01e213c573bee404c5788b72ce8"} Oct 06 15:14:01 crc kubenswrapper[4888]: I1006 15:14:01.603913 4888 generic.go:334] "Generic (PLEG): container finished" podID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerID="f7a4dbf1faac5e2e5b5dd876583631aaa0f18eb8c12e01f353ac9d4646d9b67a" exitCode=0 Oct 06 15:14:01 crc kubenswrapper[4888]: I1006 15:14:01.604927 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99wb4" event={"ID":"89fa0c5f-3bde-440e-bfc4-7532562662e5","Type":"ContainerDied","Data":"f7a4dbf1faac5e2e5b5dd876583631aaa0f18eb8c12e01f353ac9d4646d9b67a"} Oct 06 15:14:01 crc kubenswrapper[4888]: I1006 15:14:01.619223 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gsx62" 
podStartSLOduration=3.9487874830000003 podStartE2EDuration="5.61920447s" podCreationTimestamp="2025-10-06 15:13:56 +0000 UTC" firstStartedPulling="2025-10-06 15:13:59.579566612 +0000 UTC m=+779.391917330" lastFinishedPulling="2025-10-06 15:14:01.249983599 +0000 UTC m=+781.062334317" observedRunningTime="2025-10-06 15:14:01.617188287 +0000 UTC m=+781.429539005" watchObservedRunningTime="2025-10-06 15:14:01.61920447 +0000 UTC m=+781.431555188" Oct 06 15:14:03 crc kubenswrapper[4888]: I1006 15:14:03.623407 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99wb4" event={"ID":"89fa0c5f-3bde-440e-bfc4-7532562662e5","Type":"ContainerStarted","Data":"966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f"} Oct 06 15:14:03 crc kubenswrapper[4888]: I1006 15:14:03.644713 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-99wb4" podStartSLOduration=4.413146416 podStartE2EDuration="7.644695049s" podCreationTimestamp="2025-10-06 15:13:56 +0000 UTC" firstStartedPulling="2025-10-06 15:13:59.586767279 +0000 UTC m=+779.399117997" lastFinishedPulling="2025-10-06 15:14:02.818315912 +0000 UTC m=+782.630666630" observedRunningTime="2025-10-06 15:14:03.640554789 +0000 UTC m=+783.452905527" watchObservedRunningTime="2025-10-06 15:14:03.644695049 +0000 UTC m=+783.457045767" Oct 06 15:14:06 crc kubenswrapper[4888]: I1006 15:14:06.352964 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:14:06 crc kubenswrapper[4888]: I1006 15:14:06.353515 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:14:07 crc kubenswrapper[4888]: I1006 15:14:07.169038 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:14:07 crc kubenswrapper[4888]: I1006 15:14:07.169510 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:14:07 crc kubenswrapper[4888]: I1006 15:14:07.210677 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:14:07 crc kubenswrapper[4888]: I1006 15:14:07.393424 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-99wb4" podUID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerName="registry-server" probeResult="failure" output=< Oct 06 15:14:07 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s Oct 06 15:14:07 crc kubenswrapper[4888]: > Oct 06 15:14:07 crc kubenswrapper[4888]: I1006 15:14:07.703154 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.215652 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.632885 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z9h9n"]
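The pod_startup_latency_tracker records here are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling); the trailing ...0000003 on the logged SLO value is ordinary float64 serialization noise. A small Go check against the redhat-marketplace-gsx62 record above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps as they appear in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-10-06 15:13:56 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-10-06 15:13:59.579566612 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-10-06 15:14:01.249983599 +0000 UTC")  // lastFinishedPulling
	running := parse("2025-10-06 15:14:01.61920447 +0000 UTC")    // watchObservedRunningTime

	e2e := running.Sub(created)     // 5.61920447s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 1.670416987s spent pulling images
	fmt.Println(e2e, e2e-pull)      // prints: 5.61920447s 3.948787483s == podStartSLOduration
}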
Need to start a new one" pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.645135 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9h9n"] Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.741988 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-utilities\") pod \"community-operators-z9h9n\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.742279 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbctp\" (UniqueName: \"kubernetes.io/projected/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-kube-api-access-kbctp\") pod \"community-operators-z9h9n\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.742342 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-catalog-content\") pod \"community-operators-z9h9n\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.843917 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-utilities\") pod \"community-operators-z9h9n\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.843970 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbctp\" (UniqueName: \"kubernetes.io/projected/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-kube-api-access-kbctp\") pod \"community-operators-z9h9n\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.844036 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-catalog-content\") pod \"community-operators-z9h9n\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.844555 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-utilities\") pod \"community-operators-z9h9n\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.844614 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-catalog-content\") pod \"community-operators-z9h9n\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.861468 4888 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kbctp\" (UniqueName: \"kubernetes.io/projected/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-kube-api-access-kbctp\") pod \"community-operators-z9h9n\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:11 crc kubenswrapper[4888]: I1006 15:14:11.965112 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.422232 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsx62"] Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.422834 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gsx62" podUID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" containerName="registry-server" containerID="cri-o://d9c018048e674b7207754a2240f3b038a6e6b01e213c573bee404c5788b72ce8" gracePeriod=2 Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.474127 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9h9n"] Oct 06 15:14:12 crc kubenswrapper[4888]: W1006 15:14:12.480433 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod306e0de1_6cfa_45ab_ade4_81f1e26bb57b.slice/crio-39d5c0822c5ad0e17459db825ca6a5a2265fe145eef60f93d6ab57f86247c2a1 WatchSource:0}: Error finding container 39d5c0822c5ad0e17459db825ca6a5a2265fe145eef60f93d6ab57f86247c2a1: Status 404 returned error can't find the container with id 39d5c0822c5ad0e17459db825ca6a5a2265fe145eef60f93d6ab57f86247c2a1 Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.680586 4888 generic.go:334] "Generic (PLEG): container finished" podID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" containerID="d9c018048e674b7207754a2240f3b038a6e6b01e213c573bee404c5788b72ce8" exitCode=0 Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.680664 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsx62" event={"ID":"f28c1d05-4c12-435b-9b8b-c4c0ca621630","Type":"ContainerDied","Data":"d9c018048e674b7207754a2240f3b038a6e6b01e213c573bee404c5788b72ce8"} Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.682424 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9h9n" event={"ID":"306e0de1-6cfa-45ab-ade4-81f1e26bb57b","Type":"ContainerStarted","Data":"39d5c0822c5ad0e17459db825ca6a5a2265fe145eef60f93d6ab57f86247c2a1"} Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.806972 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.972734 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-catalog-content\") pod \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.972775 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgmfw\" (UniqueName: \"kubernetes.io/projected/f28c1d05-4c12-435b-9b8b-c4c0ca621630-kube-api-access-vgmfw\") pod \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.972852 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-utilities\") pod \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\" (UID: \"f28c1d05-4c12-435b-9b8b-c4c0ca621630\") " Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.973664 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-utilities" (OuterVolumeSpecName: "utilities") pod "f28c1d05-4c12-435b-9b8b-c4c0ca621630" (UID: "f28c1d05-4c12-435b-9b8b-c4c0ca621630"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.978905 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f28c1d05-4c12-435b-9b8b-c4c0ca621630-kube-api-access-vgmfw" (OuterVolumeSpecName: "kube-api-access-vgmfw") pod "f28c1d05-4c12-435b-9b8b-c4c0ca621630" (UID: "f28c1d05-4c12-435b-9b8b-c4c0ca621630"). InnerVolumeSpecName "kube-api-access-vgmfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:14:12 crc kubenswrapper[4888]: I1006 15:14:12.986053 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f28c1d05-4c12-435b-9b8b-c4c0ca621630" (UID: "f28c1d05-4c12-435b-9b8b-c4c0ca621630"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.075409 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.075450 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f28c1d05-4c12-435b-9b8b-c4c0ca621630-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.075464 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgmfw\" (UniqueName: \"kubernetes.io/projected/f28c1d05-4c12-435b-9b8b-c4c0ca621630-kube-api-access-vgmfw\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.690592 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsx62" Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.690596 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsx62" event={"ID":"f28c1d05-4c12-435b-9b8b-c4c0ca621630","Type":"ContainerDied","Data":"1189f3039ab9ccb1de619c856d08e7cd51458f405196781260e225f4d1696832"} Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.690935 4888 scope.go:117] "RemoveContainer" containerID="d9c018048e674b7207754a2240f3b038a6e6b01e213c573bee404c5788b72ce8" Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.692501 4888 generic.go:334] "Generic (PLEG): container finished" podID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" containerID="eb8e430c07f2e4c741184b7482cf09370bd7c2957959fe5f25d01e1b53bc4aba" exitCode=0 Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.692536 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9h9n" event={"ID":"306e0de1-6cfa-45ab-ade4-81f1e26bb57b","Type":"ContainerDied","Data":"eb8e430c07f2e4c741184b7482cf09370bd7c2957959fe5f25d01e1b53bc4aba"} Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.721729 4888 scope.go:117] "RemoveContainer" containerID="622361a102cd6e954723fc34426a73cdc684a044f871895b4d72a55b5b613832" Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.745237 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsx62"] Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.751479 4888 scope.go:117] "RemoveContainer" containerID="b4dc503c0016b9e4bec8c2ab7b36bb51ad480a4d6b15d037490b6842eafd98a6" Oct 06 15:14:13 crc kubenswrapper[4888]: I1006 15:14:13.755651 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsx62"] Oct 06 15:14:14 crc kubenswrapper[4888]: I1006 15:14:14.702154 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9h9n" event={"ID":"306e0de1-6cfa-45ab-ade4-81f1e26bb57b","Type":"ContainerStarted","Data":"07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c"} Oct 06 15:14:14 crc kubenswrapper[4888]: I1006 15:14:14.930895 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" path="/var/lib/kubelet/pods/f28c1d05-4c12-435b-9b8b-c4c0ca621630/volumes" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.222680 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8lkz"] Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.222974 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z8lkz" podUID="108db9f0-4278-4daa-a41b-3f501131871f" containerName="registry-server" containerID="cri-o://c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb" gracePeriod=2 Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.569281 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.710788 4888 generic.go:334] "Generic (PLEG): container finished" podID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" containerID="07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c" exitCode=0 Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.712523 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9h9n" event={"ID":"306e0de1-6cfa-45ab-ade4-81f1e26bb57b","Type":"ContainerDied","Data":"07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c"} Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.713165 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-utilities\") pod \"108db9f0-4278-4daa-a41b-3f501131871f\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.713774 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsm9k\" (UniqueName: \"kubernetes.io/projected/108db9f0-4278-4daa-a41b-3f501131871f-kube-api-access-lsm9k\") pod \"108db9f0-4278-4daa-a41b-3f501131871f\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.713832 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-catalog-content\") pod \"108db9f0-4278-4daa-a41b-3f501131871f\" (UID: \"108db9f0-4278-4daa-a41b-3f501131871f\") " Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.714273 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-utilities" (OuterVolumeSpecName: "utilities") pod "108db9f0-4278-4daa-a41b-3f501131871f" (UID: "108db9f0-4278-4daa-a41b-3f501131871f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.723694 4888 generic.go:334] "Generic (PLEG): container finished" podID="108db9f0-4278-4daa-a41b-3f501131871f" containerID="c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb" exitCode=0 Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.723757 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8lkz" event={"ID":"108db9f0-4278-4daa-a41b-3f501131871f","Type":"ContainerDied","Data":"c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb"} Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.723807 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8lkz" event={"ID":"108db9f0-4278-4daa-a41b-3f501131871f","Type":"ContainerDied","Data":"8e882ae1c44125668bde34c02545abff988bdfd703bd0d406f4445fd59bc3ec0"} Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.723834 4888 scope.go:117] "RemoveContainer" containerID="c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.724317 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8lkz" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.741142 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108db9f0-4278-4daa-a41b-3f501131871f-kube-api-access-lsm9k" (OuterVolumeSpecName: "kube-api-access-lsm9k") pod "108db9f0-4278-4daa-a41b-3f501131871f" (UID: "108db9f0-4278-4daa-a41b-3f501131871f"). InnerVolumeSpecName "kube-api-access-lsm9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.764423 4888 scope.go:117] "RemoveContainer" containerID="d79cd055c4db0fd56845d8e7c081922c99d28aad11c8f042a9bcca4ba8907d09" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.782594 4888 scope.go:117] "RemoveContainer" containerID="9be9191c304c92e87dc4b5864855e2b5304ea359198cd00689ed1bef5bbc2ca9" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.788129 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "108db9f0-4278-4daa-a41b-3f501131871f" (UID: "108db9f0-4278-4daa-a41b-3f501131871f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.803683 4888 scope.go:117] "RemoveContainer" containerID="c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb" Oct 06 15:14:15 crc kubenswrapper[4888]: E1006 15:14:15.804480 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb\": container with ID starting with c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb not found: ID does not exist" containerID="c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.804526 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb"} err="failed to get container status \"c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb\": rpc error: code = NotFound desc = could not find container \"c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb\": container with ID starting with c5e94c4e7810851d5e009ccbf81ecbc69c6475ddb4495e50dec95832862253cb not found: ID does not exist" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.804556 4888 scope.go:117] "RemoveContainer" containerID="d79cd055c4db0fd56845d8e7c081922c99d28aad11c8f042a9bcca4ba8907d09" Oct 06 15:14:15 crc kubenswrapper[4888]: E1006 15:14:15.804938 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79cd055c4db0fd56845d8e7c081922c99d28aad11c8f042a9bcca4ba8907d09\": container with ID starting with d79cd055c4db0fd56845d8e7c081922c99d28aad11c8f042a9bcca4ba8907d09 not found: ID does not exist" containerID="d79cd055c4db0fd56845d8e7c081922c99d28aad11c8f042a9bcca4ba8907d09" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.804961 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79cd055c4db0fd56845d8e7c081922c99d28aad11c8f042a9bcca4ba8907d09"} err="failed to get container status 
\"d79cd055c4db0fd56845d8e7c081922c99d28aad11c8f042a9bcca4ba8907d09\": rpc error: code = NotFound desc = could not find container \"d79cd055c4db0fd56845d8e7c081922c99d28aad11c8f042a9bcca4ba8907d09\": container with ID starting with d79cd055c4db0fd56845d8e7c081922c99d28aad11c8f042a9bcca4ba8907d09 not found: ID does not exist" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.804977 4888 scope.go:117] "RemoveContainer" containerID="9be9191c304c92e87dc4b5864855e2b5304ea359198cd00689ed1bef5bbc2ca9" Oct 06 15:14:15 crc kubenswrapper[4888]: E1006 15:14:15.805232 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be9191c304c92e87dc4b5864855e2b5304ea359198cd00689ed1bef5bbc2ca9\": container with ID starting with 9be9191c304c92e87dc4b5864855e2b5304ea359198cd00689ed1bef5bbc2ca9 not found: ID does not exist" containerID="9be9191c304c92e87dc4b5864855e2b5304ea359198cd00689ed1bef5bbc2ca9" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.805258 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be9191c304c92e87dc4b5864855e2b5304ea359198cd00689ed1bef5bbc2ca9"} err="failed to get container status \"9be9191c304c92e87dc4b5864855e2b5304ea359198cd00689ed1bef5bbc2ca9\": rpc error: code = NotFound desc = could not find container \"9be9191c304c92e87dc4b5864855e2b5304ea359198cd00689ed1bef5bbc2ca9\": container with ID starting with 9be9191c304c92e87dc4b5864855e2b5304ea359198cd00689ed1bef5bbc2ca9 not found: ID does not exist" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.815758 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.815791 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/108db9f0-4278-4daa-a41b-3f501131871f-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:15 crc kubenswrapper[4888]: I1006 15:14:15.815818 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsm9k\" (UniqueName: \"kubernetes.io/projected/108db9f0-4278-4daa-a41b-3f501131871f-kube-api-access-lsm9k\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:16 crc kubenswrapper[4888]: I1006 15:14:16.049949 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8lkz"] Oct 06 15:14:16 crc kubenswrapper[4888]: I1006 15:14:16.056049 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z8lkz"] Oct 06 15:14:16 crc kubenswrapper[4888]: I1006 15:14:16.397183 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:14:16 crc kubenswrapper[4888]: I1006 15:14:16.442090 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:14:16 crc kubenswrapper[4888]: I1006 15:14:16.740441 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9h9n" event={"ID":"306e0de1-6cfa-45ab-ade4-81f1e26bb57b","Type":"ContainerStarted","Data":"7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d"} Oct 06 15:14:16 crc kubenswrapper[4888]: I1006 15:14:16.758881 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-z9h9n" podStartSLOduration=3.361050618 podStartE2EDuration="5.75885978s" podCreationTimestamp="2025-10-06 15:14:11 +0000 UTC" firstStartedPulling="2025-10-06 15:14:13.695962946 +0000 UTC m=+793.508313664" lastFinishedPulling="2025-10-06 15:14:16.093772108 +0000 UTC m=+795.906122826" observedRunningTime="2025-10-06 15:14:16.757664103 +0000 UTC m=+796.570014831" watchObservedRunningTime="2025-10-06 15:14:16.75885978 +0000 UTC m=+796.571210498" Oct 06 15:14:16 crc kubenswrapper[4888]: I1006 15:14:16.928272 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="108db9f0-4278-4daa-a41b-3f501131871f" path="/var/lib/kubelet/pods/108db9f0-4278-4daa-a41b-3f501131871f/volumes" Oct 06 15:14:21 crc kubenswrapper[4888]: I1006 15:14:21.966246 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:21 crc kubenswrapper[4888]: I1006 15:14:21.966605 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.038898 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99wb4"] Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.039205 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-99wb4" podUID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerName="registry-server" containerID="cri-o://966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f" gracePeriod=2 Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.049193 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.559564 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.700140 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvx89\" (UniqueName: \"kubernetes.io/projected/89fa0c5f-3bde-440e-bfc4-7532562662e5-kube-api-access-qvx89\") pod \"89fa0c5f-3bde-440e-bfc4-7532562662e5\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.700233 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-catalog-content\") pod \"89fa0c5f-3bde-440e-bfc4-7532562662e5\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.700253 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-utilities\") pod \"89fa0c5f-3bde-440e-bfc4-7532562662e5\" (UID: \"89fa0c5f-3bde-440e-bfc4-7532562662e5\") " Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.701008 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-utilities" (OuterVolumeSpecName: "utilities") pod "89fa0c5f-3bde-440e-bfc4-7532562662e5" (UID: "89fa0c5f-3bde-440e-bfc4-7532562662e5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.716382 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fa0c5f-3bde-440e-bfc4-7532562662e5-kube-api-access-qvx89" (OuterVolumeSpecName: "kube-api-access-qvx89") pod "89fa0c5f-3bde-440e-bfc4-7532562662e5" (UID: "89fa0c5f-3bde-440e-bfc4-7532562662e5"). InnerVolumeSpecName "kube-api-access-qvx89". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.777399 4888 generic.go:334] "Generic (PLEG): container finished" podID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerID="966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f" exitCode=0 Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.777458 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99wb4" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.777521 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99wb4" event={"ID":"89fa0c5f-3bde-440e-bfc4-7532562662e5","Type":"ContainerDied","Data":"966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f"} Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.777545 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99wb4" event={"ID":"89fa0c5f-3bde-440e-bfc4-7532562662e5","Type":"ContainerDied","Data":"1d8179c7765018d46e8c4c6bd210fbd8eae7c7235c632d58c1cf8116342782f5"} Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.777561 4888 scope.go:117] "RemoveContainer" containerID="966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.806574 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvx89\" (UniqueName: \"kubernetes.io/projected/89fa0c5f-3bde-440e-bfc4-7532562662e5-kube-api-access-qvx89\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.806613 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.817027 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89fa0c5f-3bde-440e-bfc4-7532562662e5" (UID: "89fa0c5f-3bde-440e-bfc4-7532562662e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.845274 4888 scope.go:117] "RemoveContainer" containerID="f7a4dbf1faac5e2e5b5dd876583631aaa0f18eb8c12e01f353ac9d4646d9b67a" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.877305 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.900961 4888 scope.go:117] "RemoveContainer" containerID="93c3f639976ea063d14e847c0997bcaae33fbdbf01c94f787ce677d94a6024f2" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.907602 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89fa0c5f-3bde-440e-bfc4-7532562662e5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.919658 4888 scope.go:117] "RemoveContainer" containerID="966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f" Oct 06 15:14:22 crc kubenswrapper[4888]: E1006 15:14:22.921624 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f\": container with ID starting with 966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f not found: ID does not exist" containerID="966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.921667 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f"} err="failed to get container status \"966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f\": rpc error: code = NotFound desc = could not find container \"966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f\": container with ID starting with 966fe9007b5bf2faeeaa0da7d5a137a9eaa643f073c7a0e1550cc87b6814a46f not found: ID does not exist" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.921692 4888 scope.go:117] "RemoveContainer" containerID="f7a4dbf1faac5e2e5b5dd876583631aaa0f18eb8c12e01f353ac9d4646d9b67a" Oct 06 15:14:22 crc kubenswrapper[4888]: E1006 15:14:22.921983 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7a4dbf1faac5e2e5b5dd876583631aaa0f18eb8c12e01f353ac9d4646d9b67a\": container with ID starting with f7a4dbf1faac5e2e5b5dd876583631aaa0f18eb8c12e01f353ac9d4646d9b67a not found: ID does not exist" containerID="f7a4dbf1faac5e2e5b5dd876583631aaa0f18eb8c12e01f353ac9d4646d9b67a" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.922005 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7a4dbf1faac5e2e5b5dd876583631aaa0f18eb8c12e01f353ac9d4646d9b67a"} err="failed to get container status \"f7a4dbf1faac5e2e5b5dd876583631aaa0f18eb8c12e01f353ac9d4646d9b67a\": rpc error: code = NotFound desc = could not find container \"f7a4dbf1faac5e2e5b5dd876583631aaa0f18eb8c12e01f353ac9d4646d9b67a\": container with ID starting with f7a4dbf1faac5e2e5b5dd876583631aaa0f18eb8c12e01f353ac9d4646d9b67a not found: ID does not exist" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.922018 4888 scope.go:117] "RemoveContainer" containerID="93c3f639976ea063d14e847c0997bcaae33fbdbf01c94f787ce677d94a6024f2" Oct 06 15:14:22 crc 
kubenswrapper[4888]: E1006 15:14:22.927711 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93c3f639976ea063d14e847c0997bcaae33fbdbf01c94f787ce677d94a6024f2\": container with ID starting with 93c3f639976ea063d14e847c0997bcaae33fbdbf01c94f787ce677d94a6024f2 not found: ID does not exist" containerID="93c3f639976ea063d14e847c0997bcaae33fbdbf01c94f787ce677d94a6024f2" Oct 06 15:14:22 crc kubenswrapper[4888]: I1006 15:14:22.927752 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93c3f639976ea063d14e847c0997bcaae33fbdbf01c94f787ce677d94a6024f2"} err="failed to get container status \"93c3f639976ea063d14e847c0997bcaae33fbdbf01c94f787ce677d94a6024f2\": rpc error: code = NotFound desc = could not find container \"93c3f639976ea063d14e847c0997bcaae33fbdbf01c94f787ce677d94a6024f2\": container with ID starting with 93c3f639976ea063d14e847c0997bcaae33fbdbf01c94f787ce677d94a6024f2 not found: ID does not exist" Oct 06 15:14:23 crc kubenswrapper[4888]: I1006 15:14:23.128029 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99wb4"] Oct 06 15:14:23 crc kubenswrapper[4888]: I1006 15:14:23.149650 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-99wb4"] Oct 06 15:14:24 crc kubenswrapper[4888]: I1006 15:14:24.929564 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fa0c5f-3bde-440e-bfc4-7532562662e5" path="/var/lib/kubelet/pods/89fa0c5f-3bde-440e-bfc4-7532562662e5/volumes" Oct 06 15:14:27 crc kubenswrapper[4888]: I1006 15:14:27.821420 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9h9n"] Oct 06 15:14:27 crc kubenswrapper[4888]: I1006 15:14:27.821996 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z9h9n" podUID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" containerName="registry-server" containerID="cri-o://7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d" gracePeriod=2
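The paired "RemoveContainer" / "ID does not exist" sequences above are benign: the kubelet asks the runtime to delete containers that CRI-O has already pruned, and a NotFound status simply means the container is already gone. A sketch of that idempotent-delete pattern with the standard gRPC status helpers; the remove callback is a stand-in, not the kubelet's actual CRI client:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent treats NotFound from the runtime as success: the container
// is already deleted, so retrying the deletion is harmless.
func removeIfPresent(remove func(id string) error, id string) error {
	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
		return err // a real failure, not "already gone"
	}
	return nil
}

func main() {
	// Stand-in for the runtime call, answering like the
	// "could not find container ..." responses in the log.
	missing := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeIfPresent(missing, "93c3f639976e")) // <nil>
}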
Need to start a new one" pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.475527 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbctp\" (UniqueName: \"kubernetes.io/projected/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-kube-api-access-kbctp\") pod \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.475603 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-utilities\") pod \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.475729 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-catalog-content\") pod \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\" (UID: \"306e0de1-6cfa-45ab-ade4-81f1e26bb57b\") " Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.477183 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-utilities" (OuterVolumeSpecName: "utilities") pod "306e0de1-6cfa-45ab-ade4-81f1e26bb57b" (UID: "306e0de1-6cfa-45ab-ade4-81f1e26bb57b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.495753 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-kube-api-access-kbctp" (OuterVolumeSpecName: "kube-api-access-kbctp") pod "306e0de1-6cfa-45ab-ade4-81f1e26bb57b" (UID: "306e0de1-6cfa-45ab-ade4-81f1e26bb57b"). InnerVolumeSpecName "kube-api-access-kbctp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.527113 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "306e0de1-6cfa-45ab-ade4-81f1e26bb57b" (UID: "306e0de1-6cfa-45ab-ade4-81f1e26bb57b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.576936 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbctp\" (UniqueName: \"kubernetes.io/projected/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-kube-api-access-kbctp\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.576964 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.576973 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/306e0de1-6cfa-45ab-ade4-81f1e26bb57b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.814261 4888 generic.go:334] "Generic (PLEG): container finished" podID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" containerID="7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d" exitCode=0 Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.814308 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9h9n" event={"ID":"306e0de1-6cfa-45ab-ade4-81f1e26bb57b","Type":"ContainerDied","Data":"7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d"} Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.814338 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9h9n" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.814354 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9h9n" event={"ID":"306e0de1-6cfa-45ab-ade4-81f1e26bb57b","Type":"ContainerDied","Data":"39d5c0822c5ad0e17459db825ca6a5a2265fe145eef60f93d6ab57f86247c2a1"} Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.814374 4888 scope.go:117] "RemoveContainer" containerID="7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.851202 4888 scope.go:117] "RemoveContainer" containerID="07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.855213 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9h9n"] Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.860696 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z9h9n"] Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.889510 4888 scope.go:117] "RemoveContainer" containerID="eb8e430c07f2e4c741184b7482cf09370bd7c2957959fe5f25d01e1b53bc4aba" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.908209 4888 scope.go:117] "RemoveContainer" containerID="7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d" Oct 06 15:14:28 crc kubenswrapper[4888]: E1006 15:14:28.911329 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d\": container with ID starting with 7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d not found: ID does not exist" containerID="7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.911381 
4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d"} err="failed to get container status \"7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d\": rpc error: code = NotFound desc = could not find container \"7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d\": container with ID starting with 7e07912e41ccb76eff71ddd87e586853d87e3fed46ac0983ad254ca577ef2d1d not found: ID does not exist" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.911404 4888 scope.go:117] "RemoveContainer" containerID="07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c" Oct 06 15:14:28 crc kubenswrapper[4888]: E1006 15:14:28.912094 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c\": container with ID starting with 07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c not found: ID does not exist" containerID="07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.912148 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c"} err="failed to get container status \"07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c\": rpc error: code = NotFound desc = could not find container \"07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c\": container with ID starting with 07e001e987f47b21502c43ae7418b2124b8b51b875fbbffc2004349a4d30e79c not found: ID does not exist" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.912187 4888 scope.go:117] "RemoveContainer" containerID="eb8e430c07f2e4c741184b7482cf09370bd7c2957959fe5f25d01e1b53bc4aba" Oct 06 15:14:28 crc kubenswrapper[4888]: E1006 15:14:28.912540 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb8e430c07f2e4c741184b7482cf09370bd7c2957959fe5f25d01e1b53bc4aba\": container with ID starting with eb8e430c07f2e4c741184b7482cf09370bd7c2957959fe5f25d01e1b53bc4aba not found: ID does not exist" containerID="eb8e430c07f2e4c741184b7482cf09370bd7c2957959fe5f25d01e1b53bc4aba" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.912573 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb8e430c07f2e4c741184b7482cf09370bd7c2957959fe5f25d01e1b53bc4aba"} err="failed to get container status \"eb8e430c07f2e4c741184b7482cf09370bd7c2957959fe5f25d01e1b53bc4aba\": rpc error: code = NotFound desc = could not find container \"eb8e430c07f2e4c741184b7482cf09370bd7c2957959fe5f25d01e1b53bc4aba\": container with ID starting with eb8e430c07f2e4c741184b7482cf09370bd7c2957959fe5f25d01e1b53bc4aba not found: ID does not exist" Oct 06 15:14:28 crc kubenswrapper[4888]: I1006 15:14:28.929842 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" path="/var/lib/kubelet/pods/306e0de1-6cfa-45ab-ade4-81f1e26bb57b/volumes" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329048 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z"] Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.329739 4888 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="108db9f0-4278-4daa-a41b-3f501131871f" containerName="extract-content" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329751 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="108db9f0-4278-4daa-a41b-3f501131871f" containerName="extract-content" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.329762 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" containerName="extract-content" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329768 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" containerName="extract-content" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.329778 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerName="extract-utilities" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329784 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerName="extract-utilities" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.329805 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" containerName="extract-utilities" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329811 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" containerName="extract-utilities" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.329820 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329827 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.329836 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerName="extract-content" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329842 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerName="extract-content" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.329855 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108db9f0-4278-4daa-a41b-3f501131871f" containerName="extract-utilities" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329861 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="108db9f0-4278-4daa-a41b-3f501131871f" containerName="extract-utilities" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.329869 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329875 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.329884 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" containerName="extract-utilities" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329890 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" containerName="extract-utilities" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 
15:14:31.329898 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108db9f0-4278-4daa-a41b-3f501131871f" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329904 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="108db9f0-4278-4daa-a41b-3f501131871f" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.329916 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329922 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.329932 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" containerName="extract-content" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.329938 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" containerName="extract-content" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.330032 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="f28c1d05-4c12-435b-9b8b-c4c0ca621630" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.330040 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="306e0de1-6cfa-45ab-ade4-81f1e26bb57b" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.330049 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="108db9f0-4278-4daa-a41b-3f501131871f" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.330058 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fa0c5f-3bde-440e-bfc4-7532562662e5" containerName="registry-server" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.330619 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.332771 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-6qj4j" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.344595 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.345844 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.350006 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zbbtw" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.363340 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.382872 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg"]
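The cpu_manager/memory_manager "RemoveStaleState" run above is routine cleanup despite the E-level lines: once the four catalog pods were removed, their per-container CPU and memory accounting entries, keyed by pod UID plus container name, were purged. A toy reduction of that bookkeeping (not the kubelet's actual state types):

package main

import "fmt"

// key mirrors how the stale-state lines identify entries:
// podUID plus containerName.
type key struct{ podUID, container string }

// removeStaleState drops assignments whose pod is no longer active.
func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("removing stale state podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(assignments, k) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[key]string{
		{"f28c1d05-4c12-435b-9b8b-c4c0ca621630", "registry-server"}: "cpuset",
		{"306e0de1-6cfa-45ab-ade4-81f1e26bb57b", "extract-content"}: "cpuset",
	}
	removeStaleState(assignments, map[string]bool{}) // neither pod is still active
	fmt.Println(len(assignments))                    // 0
}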
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.385293 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.391653 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.400981 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-cb9p2" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.428838 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.430904 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.435116 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2vdvh" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.480015 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-nn686"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.480988 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.487864 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-rd26q" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.497765 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.499203 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.505752 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ct5zx" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.510540 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mthdp\" (UniqueName: \"kubernetes.io/projected/a3e1786a-e54d-4a41-a974-fab79300a4b9-kube-api-access-mthdp\") pod \"barbican-operator-controller-manager-58c4cd55f4-kzz5z\" (UID: \"a3e1786a-e54d-4a41-a974-fab79300a4b9\") " pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.510616 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d8kw\" (UniqueName: \"kubernetes.io/projected/4871270f-8f10-42b8-880a-7ff6ab0d1476-kube-api-access-5d8kw\") pod \"cinder-operator-controller-manager-7d4d4f8d-xwmzg\" (UID: \"4871270f-8f10-42b8-880a-7ff6ab0d1476\") " pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.510643 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhqrw\" (UniqueName: \"kubernetes.io/projected/919edfbf-4b21-44cb-b821-3d3294a2beb1-kube-api-access-xhqrw\") pod \"designate-operator-controller-manager-75dfd9b554-wdknr\" (UID: \"919edfbf-4b21-44cb-b821-3d3294a2beb1\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.519218 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.532010 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.552886 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-nn686"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.567063 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.568144 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.575018 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nrkd9" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.575237 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.575831 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-jxh84"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.577021 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.588287 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-wl9dq" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.611408 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg4g9\" (UniqueName: \"kubernetes.io/projected/ba1d4452-d22b-4724-93eb-bf70500f2040-kube-api-access-cg4g9\") pod \"horizon-operator-controller-manager-76d5b87f47-gw6sx\" (UID: \"ba1d4452-d22b-4724-93eb-bf70500f2040\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.611467 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q45sd\" (UniqueName: \"kubernetes.io/projected/18e5b74c-d61b-4916-a059-5daa7e2b6277-kube-api-access-q45sd\") pod \"glance-operator-controller-manager-5dc44df7d5-6tfdd\" (UID: \"18e5b74c-d61b-4916-a059-5daa7e2b6277\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.611498 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mthdp\" (UniqueName: \"kubernetes.io/projected/a3e1786a-e54d-4a41-a974-fab79300a4b9-kube-api-access-mthdp\") pod \"barbican-operator-controller-manager-58c4cd55f4-kzz5z\" (UID: \"a3e1786a-e54d-4a41-a974-fab79300a4b9\") " pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.611542 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d8kw\" (UniqueName: \"kubernetes.io/projected/4871270f-8f10-42b8-880a-7ff6ab0d1476-kube-api-access-5d8kw\") pod \"cinder-operator-controller-manager-7d4d4f8d-xwmzg\" (UID: \"4871270f-8f10-42b8-880a-7ff6ab0d1476\") " pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.611560 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhqrw\" (UniqueName: \"kubernetes.io/projected/919edfbf-4b21-44cb-b821-3d3294a2beb1-kube-api-access-xhqrw\") pod \"designate-operator-controller-manager-75dfd9b554-wdknr\" (UID: \"919edfbf-4b21-44cb-b821-3d3294a2beb1\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.611579 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq59b\" (UniqueName: \"kubernetes.io/projected/c951d569-cbb5-4525-b66d-07f2473db97a-kube-api-access-dq59b\") pod \"heat-operator-controller-manager-54b4974c45-nn686\" (UID: \"c951d569-cbb5-4525-b66d-07f2473db97a\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.617702 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-jxh84"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.650323 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x"] Oct 06 15:14:31 crc 
kubenswrapper[4888]: I1006 15:14:31.651390 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.660033 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6pxhf" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.664228 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.667823 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d8kw\" (UniqueName: \"kubernetes.io/projected/4871270f-8f10-42b8-880a-7ff6ab0d1476-kube-api-access-5d8kw\") pod \"cinder-operator-controller-manager-7d4d4f8d-xwmzg\" (UID: \"4871270f-8f10-42b8-880a-7ff6ab0d1476\") " pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.700474 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mthdp\" (UniqueName: \"kubernetes.io/projected/a3e1786a-e54d-4a41-a974-fab79300a4b9-kube-api-access-mthdp\") pod \"barbican-operator-controller-manager-58c4cd55f4-kzz5z\" (UID: \"a3e1786a-e54d-4a41-a974-fab79300a4b9\") " pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.714339 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhqrw\" (UniqueName: \"kubernetes.io/projected/919edfbf-4b21-44cb-b821-3d3294a2beb1-kube-api-access-xhqrw\") pod \"designate-operator-controller-manager-75dfd9b554-wdknr\" (UID: \"919edfbf-4b21-44cb-b821-3d3294a2beb1\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.715012 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq59b\" (UniqueName: \"kubernetes.io/projected/c951d569-cbb5-4525-b66d-07f2473db97a-kube-api-access-dq59b\") pod \"heat-operator-controller-manager-54b4974c45-nn686\" (UID: \"c951d569-cbb5-4525-b66d-07f2473db97a\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.715049 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4cpr\" (UniqueName: \"kubernetes.io/projected/c7b475c9-9590-41e5-9bd4-d7f9fb04958c-kube-api-access-s4cpr\") pod \"ironic-operator-controller-manager-649675d675-jxh84\" (UID: \"c7b475c9-9590-41e5-9bd4-d7f9fb04958c\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.715087 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bsp5\" (UniqueName: \"kubernetes.io/projected/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-kube-api-access-9bsp5\") pod \"infra-operator-controller-manager-658588b8c9-nbssz\" (UID: \"7d3c45dc-4628-4b73-b123-dda0b0cb4d72\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.715116 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg4g9\" (UniqueName: 
\"kubernetes.io/projected/ba1d4452-d22b-4724-93eb-bf70500f2040-kube-api-access-cg4g9\") pod \"horizon-operator-controller-manager-76d5b87f47-gw6sx\" (UID: \"ba1d4452-d22b-4724-93eb-bf70500f2040\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.715147 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q45sd\" (UniqueName: \"kubernetes.io/projected/18e5b74c-d61b-4916-a059-5daa7e2b6277-kube-api-access-q45sd\") pod \"glance-operator-controller-manager-5dc44df7d5-6tfdd\" (UID: \"18e5b74c-d61b-4916-a059-5daa7e2b6277\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.715190 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-cert\") pod \"infra-operator-controller-manager-658588b8c9-nbssz\" (UID: \"7d3c45dc-4628-4b73-b123-dda0b0cb4d72\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.723933 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.727305 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.728222 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.740433 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg4g9\" (UniqueName: \"kubernetes.io/projected/ba1d4452-d22b-4724-93eb-bf70500f2040-kube-api-access-cg4g9\") pod \"horizon-operator-controller-manager-76d5b87f47-gw6sx\" (UID: \"ba1d4452-d22b-4724-93eb-bf70500f2040\") " pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.745714 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.747613 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.767855 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-kgc6c" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.768443 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.768744 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-8h6f6" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.771101 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q45sd\" (UniqueName: \"kubernetes.io/projected/18e5b74c-d61b-4916-a059-5daa7e2b6277-kube-api-access-q45sd\") pod \"glance-operator-controller-manager-5dc44df7d5-6tfdd\" (UID: \"18e5b74c-d61b-4916-a059-5daa7e2b6277\") " pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.779427 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq59b\" (UniqueName: \"kubernetes.io/projected/c951d569-cbb5-4525-b66d-07f2473db97a-kube-api-access-dq59b\") pod \"heat-operator-controller-manager-54b4974c45-nn686\" (UID: \"c951d569-cbb5-4525-b66d-07f2473db97a\") " pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.784838 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.798122 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m"]
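Every kube-api-access-* volume in these entries walks through the same three stages: reconciler_common.go:245 ("VerifyControllerAttachedVolume started") checks the volume against desired state, reconciler_common.go:218 ("MountVolume started") kicks off the mount, and operation_generator.go:637 ("MountVolume.SetUp succeeded") records it in actual state. These are projected service-account token volumes, so there is no real attach step, only SetUp. A rough sketch of that desired-versus-actual reconciliation loop, with illustrative names rather than the kubelet's real types:

package main

import "fmt"

// Volumes are keyed by their UniqueName, e.g.
// "kubernetes.io/projected/<pod-uid>-kube-api-access-q45sd".
type volumeState struct {
	desired map[string]bool // volumes the admitted pod specs require
	actual  map[string]bool // volumes already mounted
}

// setUp stands in for the operation generator's MountVolume.SetUp.
// It always succeeds here; real mounts can fail (see the secret
// errors elsewhere in this log) and are retried on a later pass.
func setUp(uniqueName string) error { return nil }

// reconcile mounts whatever is desired but not yet actual, the same
// Verify -> MountVolume started -> SetUp succeeded flow logged above.
func (s *volumeState) reconcile() {
	for name := range s.desired {
		if s.actual[name] {
			continue
		}
		fmt.Printf("operationExecutor.MountVolume started for volume %q\n", name)
		if err := setUp(name); err != nil {
			fmt.Printf("MountVolume.SetUp failed for volume %q: %v\n", name, err)
			continue // stays pending; retried on the next loop
		}
		s.actual[name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
	}
}

func main() {
	s := &volumeState{
		desired: map[string]bool{"kubernetes.io/projected/18e5b74c-d61b-4916-a059-5daa7e2b6277-kube-api-access-q45sd": true},
		actual:  map[string]bool{},
	}
	s.reconcile()
}

Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.810585 4888 util.go:30] "No sandbox for pod can be found.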
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.820260 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gqpn\" (UniqueName: \"kubernetes.io/projected/614bf51f-2fa7-48ae-a9e2-2f371656f326-kube-api-access-2gqpn\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-4p87x\" (UID: \"614bf51f-2fa7-48ae-a9e2-2f371656f326\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.820410 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-cert\") pod \"infra-operator-controller-manager-658588b8c9-nbssz\" (UID: \"7d3c45dc-4628-4b73-b123-dda0b0cb4d72\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.820479 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxk2g\" (UniqueName: \"kubernetes.io/projected/0ab9d525-93a8-4920-a6e4-e70dfd942ce3-kube-api-access-jxk2g\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-4m69r\" (UID: \"0ab9d525-93a8-4920-a6e4-e70dfd942ce3\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.820651 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4cpr\" (UniqueName: \"kubernetes.io/projected/c7b475c9-9590-41e5-9bd4-d7f9fb04958c-kube-api-access-s4cpr\") pod \"ironic-operator-controller-manager-649675d675-jxh84\" (UID: \"c7b475c9-9590-41e5-9bd4-d7f9fb04958c\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.820728 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bsp5\" (UniqueName: \"kubernetes.io/projected/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-kube-api-access-9bsp5\") pod \"infra-operator-controller-manager-658588b8c9-nbssz\" (UID: \"7d3c45dc-4628-4b73-b123-dda0b0cb4d72\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.829359 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-jpd8r" Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.835132 4888 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 06 15:14:31 crc kubenswrapper[4888]: E1006 15:14:31.835222 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-cert podName:7d3c45dc-4628-4b73-b123-dda0b0cb4d72 nodeName:}" failed. No retries permitted until 2025-10-06 15:14:32.335199178 +0000 UTC m=+812.147549896 (durationBeforeRetry 500ms). 
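The secret.go:188 / nestedpendingoperations.go:348 pair just above is the first failed attempt to mount the infra-operator cert volume: the webhook Secret does not exist yet, so the operation is parked and retried with a doubling delay, 500ms here and 1s on the next failure (15:14:32.354753 further down). The "m=+812.147549896" figure is the kubelet's monotonic clock, seconds since process start. A small sketch of the doubling schedule; the exact cap below is an assumption, not something taken from this log. The full error detail the kubelet attaches to the parked operation follows.

package main

import (
	"fmt"
	"time"
)

// next returns the following retry delay: start at 500ms (as in the
// log above) and double on each failure up to a cap. The 2m cap here
// is an illustrative assumption, not kubelet's exact constant.
func next(d time.Duration) time.Duration {
	const maxDelay = 2 * time.Minute
	if d == 0 {
		return 500 * time.Millisecond
	}
	if d*2 > maxDelay {
		return maxDelay
	}
	return d * 2
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 5; i++ {
		d = next(d)
		fmt.Printf("retry %d after %v\n", i+1, d) // 500ms, 1s, 2s, 4s, 8s
	}
}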
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-cert") pod "infra-operator-controller-manager-658588b8c9-nbssz" (UID: "7d3c45dc-4628-4b73-b123-dda0b0cb4d72") : secret "infra-operator-webhook-server-cert" not found Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.847148 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.854467 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.922614 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvxzq\" (UniqueName: \"kubernetes.io/projected/5a1c3c1c-2a06-49a0-9189-acbcdd0053c6-kube-api-access-zvxzq\") pod \"manila-operator-controller-manager-65d89cfd9f-mmfzd\" (UID: \"5a1c3c1c-2a06-49a0-9189-acbcdd0053c6\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.923038 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9x4p\" (UniqueName: \"kubernetes.io/projected/e3c80163-0837-4095-96c9-2d51ac49b7c4-kube-api-access-v9x4p\") pod \"neutron-operator-controller-manager-8d984cc4d-wwg8m\" (UID: \"e3c80163-0837-4095-96c9-2d51ac49b7c4\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.923083 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gqpn\" (UniqueName: \"kubernetes.io/projected/614bf51f-2fa7-48ae-a9e2-2f371656f326-kube-api-access-2gqpn\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-4p87x\" (UID: \"614bf51f-2fa7-48ae-a9e2-2f371656f326\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.923163 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxk2g\" (UniqueName: \"kubernetes.io/projected/0ab9d525-93a8-4920-a6e4-e70dfd942ce3-kube-api-access-jxk2g\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-4m69r\" (UID: \"0ab9d525-93a8-4920-a6e4-e70dfd942ce3\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.935741 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4cpr\" (UniqueName: \"kubernetes.io/projected/c7b475c9-9590-41e5-9bd4-d7f9fb04958c-kube-api-access-s4cpr\") pod \"ironic-operator-controller-manager-649675d675-jxh84\" (UID: \"c7b475c9-9590-41e5-9bd4-d7f9fb04958c\") " pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.935913 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.946515 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bsp5\" (UniqueName: \"kubernetes.io/projected/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-kube-api-access-9bsp5\") pod 
\"infra-operator-controller-manager-658588b8c9-nbssz\" (UID: \"7d3c45dc-4628-4b73-b123-dda0b0cb4d72\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.947197 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.957145 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.959101 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gqpn\" (UniqueName: \"kubernetes.io/projected/614bf51f-2fa7-48ae-a9e2-2f371656f326-kube-api-access-2gqpn\") pod \"keystone-operator-controller-manager-7b5ccf6d9c-4p87x\" (UID: \"614bf51f-2fa7-48ae-a9e2-2f371656f326\") " pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.966574 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxk2g\" (UniqueName: \"kubernetes.io/projected/0ab9d525-93a8-4920-a6e4-e70dfd942ce3-kube-api-access-jxk2g\") pod \"mariadb-operator-controller-manager-6cd6d7bdf5-4m69r\" (UID: \"0ab9d525-93a8-4920-a6e4-e70dfd942ce3\") " pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.966912 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.974487 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.975691 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.981486 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6zncz" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.984778 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5"] Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.987813 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.993765 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-n58s8" Oct 06 15:14:31 crc kubenswrapper[4888]: I1006 15:14:31.994283 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.005875 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.017110 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.025928 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9x4p\" (UniqueName: \"kubernetes.io/projected/e3c80163-0837-4095-96c9-2d51ac49b7c4-kube-api-access-v9x4p\") pod \"neutron-operator-controller-manager-8d984cc4d-wwg8m\" (UID: \"e3c80163-0837-4095-96c9-2d51ac49b7c4\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.026023 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvxzq\" (UniqueName: \"kubernetes.io/projected/5a1c3c1c-2a06-49a0-9189-acbcdd0053c6-kube-api-access-zvxzq\") pod \"manila-operator-controller-manager-65d89cfd9f-mmfzd\" (UID: \"5a1c3c1c-2a06-49a0-9189-acbcdd0053c6\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.027013 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.036049 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.037398 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.047196 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.047481 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-454g6" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.056339 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.057321 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.061280 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-d6m9d" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.068833 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9x4p\" (UniqueName: \"kubernetes.io/projected/e3c80163-0837-4095-96c9-2d51ac49b7c4-kube-api-access-v9x4p\") pod \"neutron-operator-controller-manager-8d984cc4d-wwg8m\" (UID: \"e3c80163-0837-4095-96c9-2d51ac49b7c4\") " pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.069442 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvxzq\" (UniqueName: \"kubernetes.io/projected/5a1c3c1c-2a06-49a0-9189-acbcdd0053c6-kube-api-access-zvxzq\") pod \"manila-operator-controller-manager-65d89cfd9f-mmfzd\" (UID: \"5a1c3c1c-2a06-49a0-9189-acbcdd0053c6\") " pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.079130 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.080708 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.097831 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.099335 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.101300 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-l4g7b" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.117438 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.122515 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-5lmmr" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.132044 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.143543 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.144314 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/216a208a-e34a-4796-a72d-79fb0dba1491-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4\" (UID: \"216a208a-e34a-4796-a72d-79fb0dba1491\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.144448 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znhph\" (UniqueName: \"kubernetes.io/projected/b725fd76-028e-4dc0-bbc5-8d18cf1e667b-kube-api-access-znhph\") pod \"nova-operator-controller-manager-7c7fc454ff-z5nwh\" (UID: \"b725fd76-028e-4dc0-bbc5-8d18cf1e667b\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.144511 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwhgt\" (UniqueName: \"kubernetes.io/projected/b798c5fc-8252-4301-b5f2-6d47107266c9-kube-api-access-vwhgt\") pod \"octavia-operator-controller-manager-7468f855d8-pdwl5\" (UID: \"b798c5fc-8252-4301-b5f2-6d47107266c9\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.144557 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcnjs\" (UniqueName: \"kubernetes.io/projected/216a208a-e34a-4796-a72d-79fb0dba1491-kube-api-access-lcnjs\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4\" (UID: \"216a208a-e34a-4796-a72d-79fb0dba1491\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.159883 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.163241 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.164431 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.176739 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.201922 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.201986 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.202517 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.203672 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.204547 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-4tbft" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.206023 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.206112 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.239195 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-v8c55" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.251605 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/216a208a-e34a-4796-a72d-79fb0dba1491-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4\" (UID: \"216a208a-e34a-4796-a72d-79fb0dba1491\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.251658 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znhph\" (UniqueName: \"kubernetes.io/projected/b725fd76-028e-4dc0-bbc5-8d18cf1e667b-kube-api-access-znhph\") pod \"nova-operator-controller-manager-7c7fc454ff-z5nwh\" (UID: \"b725fd76-028e-4dc0-bbc5-8d18cf1e667b\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.251701 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fzpc\" (UniqueName: \"kubernetes.io/projected/23fa3eba-f10e-42b2-bc39-2df07d518a0e-kube-api-access-4fzpc\") pod \"ovn-operator-controller-manager-6d8b6f9b9-jsk2z\" (UID: \"23fa3eba-f10e-42b2-bc39-2df07d518a0e\") " pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.251741 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwhgt\" (UniqueName: \"kubernetes.io/projected/b798c5fc-8252-4301-b5f2-6d47107266c9-kube-api-access-vwhgt\") pod \"octavia-operator-controller-manager-7468f855d8-pdwl5\" (UID: \"b798c5fc-8252-4301-b5f2-6d47107266c9\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.251781 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcnjs\" (UniqueName: \"kubernetes.io/projected/216a208a-e34a-4796-a72d-79fb0dba1491-kube-api-access-lcnjs\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4\" (UID: \"216a208a-e34a-4796-a72d-79fb0dba1491\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.251829 4888 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mp2\" (UniqueName: \"kubernetes.io/projected/edac5ad0-266b-449a-a2a8-95eb9afb0348-kube-api-access-h2mp2\") pod \"placement-operator-controller-manager-54689d9f88-m5c7d\" (UID: \"edac5ad0-266b-449a-a2a8-95eb9afb0348\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.251861 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdv8t\" (UniqueName: \"kubernetes.io/projected/470e257a-9b82-4ffc-88a2-974afe3d6abb-kube-api-access-zdv8t\") pod \"swift-operator-controller-manager-6859f9b676-qtk96\" (UID: \"470e257a-9b82-4ffc-88a2-974afe3d6abb\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" Oct 06 15:14:32 crc kubenswrapper[4888]: E1006 15:14:32.252013 4888 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 06 15:14:32 crc kubenswrapper[4888]: E1006 15:14:32.252063 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/216a208a-e34a-4796-a72d-79fb0dba1491-cert podName:216a208a-e34a-4796-a72d-79fb0dba1491 nodeName:}" failed. No retries permitted until 2025-10-06 15:14:32.752045241 +0000 UTC m=+812.564395959 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/216a208a-e34a-4796-a72d-79fb0dba1491-cert") pod "openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" (UID: "216a208a-e34a-4796-a72d-79fb0dba1491") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.305295 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.307763 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znhph\" (UniqueName: \"kubernetes.io/projected/b725fd76-028e-4dc0-bbc5-8d18cf1e667b-kube-api-access-znhph\") pod \"nova-operator-controller-manager-7c7fc454ff-z5nwh\" (UID: \"b725fd76-028e-4dc0-bbc5-8d18cf1e667b\") " pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.353943 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fzpc\" (UniqueName: \"kubernetes.io/projected/23fa3eba-f10e-42b2-bc39-2df07d518a0e-kube-api-access-4fzpc\") pod \"ovn-operator-controller-manager-6d8b6f9b9-jsk2z\" (UID: \"23fa3eba-f10e-42b2-bc39-2df07d518a0e\") " pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.353990 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxfjt\" (UniqueName: \"kubernetes.io/projected/b3acc46c-a819-4e19-8534-34855edcdbaa-kube-api-access-zxfjt\") pod \"telemetry-operator-controller-manager-5d4d74dd89-zdzkx\" (UID: \"b3acc46c-a819-4e19-8534-34855edcdbaa\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.354052 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2mp2\" (UniqueName: 
\"kubernetes.io/projected/edac5ad0-266b-449a-a2a8-95eb9afb0348-kube-api-access-h2mp2\") pod \"placement-operator-controller-manager-54689d9f88-m5c7d\" (UID: \"edac5ad0-266b-449a-a2a8-95eb9afb0348\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.354076 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdv8t\" (UniqueName: \"kubernetes.io/projected/470e257a-9b82-4ffc-88a2-974afe3d6abb-kube-api-access-zdv8t\") pod \"swift-operator-controller-manager-6859f9b676-qtk96\" (UID: \"470e257a-9b82-4ffc-88a2-974afe3d6abb\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.354113 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-cert\") pod \"infra-operator-controller-manager-658588b8c9-nbssz\" (UID: \"7d3c45dc-4628-4b73-b123-dda0b0cb4d72\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.354163 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hngwf\" (UniqueName: \"kubernetes.io/projected/99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2-kube-api-access-hngwf\") pod \"test-operator-controller-manager-5cd5cb47d7-c6fgp\" (UID: \"99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" Oct 06 15:14:32 crc kubenswrapper[4888]: E1006 15:14:32.354715 4888 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 06 15:14:32 crc kubenswrapper[4888]: E1006 15:14:32.354753 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-cert podName:7d3c45dc-4628-4b73-b123-dda0b0cb4d72 nodeName:}" failed. No retries permitted until 2025-10-06 15:14:33.354740331 +0000 UTC m=+813.167091049 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-cert") pod "infra-operator-controller-manager-658588b8c9-nbssz" (UID: "7d3c45dc-4628-4b73-b123-dda0b0cb4d72") : secret "infra-operator-webhook-server-cert" not found Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.363535 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.365323 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.375536 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.383242 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-m5xpm" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.401450 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwhgt\" (UniqueName: \"kubernetes.io/projected/b798c5fc-8252-4301-b5f2-6d47107266c9-kube-api-access-vwhgt\") pod \"octavia-operator-controller-manager-7468f855d8-pdwl5\" (UID: \"b798c5fc-8252-4301-b5f2-6d47107266c9\") " pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.402164 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.402726 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcnjs\" (UniqueName: \"kubernetes.io/projected/216a208a-e34a-4796-a72d-79fb0dba1491-kube-api-access-lcnjs\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4\" (UID: \"216a208a-e34a-4796-a72d-79fb0dba1491\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.407587 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.456424 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hngwf\" (UniqueName: \"kubernetes.io/projected/99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2-kube-api-access-hngwf\") pod \"test-operator-controller-manager-5cd5cb47d7-c6fgp\" (UID: \"99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.456723 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxfjt\" (UniqueName: \"kubernetes.io/projected/b3acc46c-a819-4e19-8534-34855edcdbaa-kube-api-access-zxfjt\") pod \"telemetry-operator-controller-manager-5d4d74dd89-zdzkx\" (UID: \"b3acc46c-a819-4e19-8534-34855edcdbaa\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.456872 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2gx5\" (UniqueName: \"kubernetes.io/projected/cc6afaf6-e217-4d09-8cdf-d0ad4dd79db9-kube-api-access-c2gx5\") pod \"watcher-operator-controller-manager-6cbc6dd547-vbgpl\" (UID: \"cc6afaf6-e217-4d09-8cdf-d0ad4dd79db9\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.468491 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2mp2\" (UniqueName: \"kubernetes.io/projected/edac5ad0-266b-449a-a2a8-95eb9afb0348-kube-api-access-h2mp2\") pod \"placement-operator-controller-manager-54689d9f88-m5c7d\" (UID: \"edac5ad0-266b-449a-a2a8-95eb9afb0348\") " pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d"
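For reference when grepping this log: every record is a journald prefix (timestamp, host crc, unit kubenswrapper with PID 4888) followed by a klog header, i.e. severity letter (I or E here), month+day, wall-clock time with microseconds, a thread id (here equal to the PID), source file:line, then the message. A throwaway parser sketch for lines in exactly this shape; the pattern is tuned to this log, not a general journald/klog parser:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the journald prefix plus klog header, e.g.
// `Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.469783 4888 operation_generator.go:637] msg`.
var klogLine = regexp.MustCompile(
	`^(\w{3} \d{2} \d{2}:\d{2}:\d{2}) (\S+) (\w+)\[(\d+)\]: ([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := `Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.469783 4888 operation_generator.go:637] "MountVolume.SetUp succeeded..."`
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("severity:", m[5]) // I = info, E = error
	fmt.Println("source:  ", m[9]+":"+m[10])
	fmt.Println("message: ", m[11])
}

Oct 06 15:14:32 crc kubenswrapper[4888]: I1006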
15:14:32.469783 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdv8t\" (UniqueName: \"kubernetes.io/projected/470e257a-9b82-4ffc-88a2-974afe3d6abb-kube-api-access-zdv8t\") pod \"swift-operator-controller-manager-6859f9b676-qtk96\" (UID: \"470e257a-9b82-4ffc-88a2-974afe3d6abb\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.477963 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fzpc\" (UniqueName: \"kubernetes.io/projected/23fa3eba-f10e-42b2-bc39-2df07d518a0e-kube-api-access-4fzpc\") pod \"ovn-operator-controller-manager-6d8b6f9b9-jsk2z\" (UID: \"23fa3eba-f10e-42b2-bc39-2df07d518a0e\") " pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.486545 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.561410 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hngwf\" (UniqueName: \"kubernetes.io/projected/99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2-kube-api-access-hngwf\") pod \"test-operator-controller-manager-5cd5cb47d7-c6fgp\" (UID: \"99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.563269 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxfjt\" (UniqueName: \"kubernetes.io/projected/b3acc46c-a819-4e19-8534-34855edcdbaa-kube-api-access-zxfjt\") pod \"telemetry-operator-controller-manager-5d4d74dd89-zdzkx\" (UID: \"b3acc46c-a819-4e19-8534-34855edcdbaa\") " pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.571850 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2gx5\" (UniqueName: \"kubernetes.io/projected/cc6afaf6-e217-4d09-8cdf-d0ad4dd79db9-kube-api-access-c2gx5\") pod \"watcher-operator-controller-manager-6cbc6dd547-vbgpl\" (UID: \"cc6afaf6-e217-4d09-8cdf-d0ad4dd79db9\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.641589 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2gx5\" (UniqueName: \"kubernetes.io/projected/cc6afaf6-e217-4d09-8cdf-d0ad4dd79db9-kube-api-access-c2gx5\") pod \"watcher-operator-controller-manager-6cbc6dd547-vbgpl\" (UID: \"cc6afaf6-e217-4d09-8cdf-d0ad4dd79db9\") " pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.722977 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.724191 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.729649 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.729867 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dp6c4" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.746145 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.771714 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.771754 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.802672 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/216a208a-e34a-4796-a72d-79fb0dba1491-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4\" (UID: \"216a208a-e34a-4796-a72d-79fb0dba1491\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" Oct 06 15:14:32 crc kubenswrapper[4888]: E1006 15:14:32.802940 4888 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 06 15:14:32 crc kubenswrapper[4888]: E1006 15:14:32.803004 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/216a208a-e34a-4796-a72d-79fb0dba1491-cert podName:216a208a-e34a-4796-a72d-79fb0dba1491 nodeName:}" failed. No retries permitted until 2025-10-06 15:14:33.802987393 +0000 UTC m=+813.615338111 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/216a208a-e34a-4796-a72d-79fb0dba1491-cert") pod "openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" (UID: "216a208a-e34a-4796-a72d-79fb0dba1491") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.842117 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.861465 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.903808 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ad05938-27d0-4006-a663-5f3ae2526053-cert\") pod \"openstack-operator-controller-manager-847bc59d9d-djwkk\" (UID: \"3ad05938-27d0-4006-a663-5f3ae2526053\") " pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.903891 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l94xh\" (UniqueName: \"kubernetes.io/projected/3ad05938-27d0-4006-a663-5f3ae2526053-kube-api-access-l94xh\") pod \"openstack-operator-controller-manager-847bc59d9d-djwkk\" (UID: \"3ad05938-27d0-4006-a663-5f3ae2526053\") " pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.911446 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.925539 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr"] Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.926484 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr" Oct 06 15:14:32 crc kubenswrapper[4888]: I1006 15:14:32.988333 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jplz7" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.006791 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ad05938-27d0-4006-a663-5f3ae2526053-cert\") pod \"openstack-operator-controller-manager-847bc59d9d-djwkk\" (UID: \"3ad05938-27d0-4006-a663-5f3ae2526053\") " pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.006884 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l94xh\" (UniqueName: \"kubernetes.io/projected/3ad05938-27d0-4006-a663-5f3ae2526053-kube-api-access-l94xh\") pod \"openstack-operator-controller-manager-847bc59d9d-djwkk\" (UID: \"3ad05938-27d0-4006-a663-5f3ae2526053\") " pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:33 crc kubenswrapper[4888]: E1006 15:14:33.007368 4888 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 06 15:14:33 crc kubenswrapper[4888]: E1006 15:14:33.007448 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ad05938-27d0-4006-a663-5f3ae2526053-cert podName:3ad05938-27d0-4006-a663-5f3ae2526053 nodeName:}" failed. No retries permitted until 2025-10-06 15:14:33.507426344 +0000 UTC m=+813.319777062 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ad05938-27d0-4006-a663-5f3ae2526053-cert") pod "openstack-operator-controller-manager-847bc59d9d-djwkk" (UID: "3ad05938-27d0-4006-a663-5f3ae2526053") : secret "webhook-server-cert" not found Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.013548 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr"] Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.036305 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l94xh\" (UniqueName: \"kubernetes.io/projected/3ad05938-27d0-4006-a663-5f3ae2526053-kube-api-access-l94xh\") pod \"openstack-operator-controller-manager-847bc59d9d-djwkk\" (UID: \"3ad05938-27d0-4006-a663-5f3ae2526053\") " pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.110378 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnkx5\" (UniqueName: \"kubernetes.io/projected/a4a728bc-e48c-43b7-b143-aded2946ee76-kube-api-access-vnkx5\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr\" (UID: \"a4a728bc-e48c-43b7-b143-aded2946ee76\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.211438 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnkx5\" (UniqueName: \"kubernetes.io/projected/a4a728bc-e48c-43b7-b143-aded2946ee76-kube-api-access-vnkx5\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr\" (UID: \"a4a728bc-e48c-43b7-b143-aded2946ee76\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.217818 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx"] Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.246375 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnkx5\" (UniqueName: \"kubernetes.io/projected/a4a728bc-e48c-43b7-b143-aded2946ee76-kube-api-access-vnkx5\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr\" (UID: \"a4a728bc-e48c-43b7-b143-aded2946ee76\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.358071 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.413932 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-cert\") pod \"infra-operator-controller-manager-658588b8c9-nbssz\" (UID: \"7d3c45dc-4628-4b73-b123-dda0b0cb4d72\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.431407 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d3c45dc-4628-4b73-b123-dda0b0cb4d72-cert\") pod \"infra-operator-controller-manager-658588b8c9-nbssz\" (UID: \"7d3c45dc-4628-4b73-b123-dda0b0cb4d72\") " pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.517454 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ad05938-27d0-4006-a663-5f3ae2526053-cert\") pod \"openstack-operator-controller-manager-847bc59d9d-djwkk\" (UID: \"3ad05938-27d0-4006-a663-5f3ae2526053\") " pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:33 crc kubenswrapper[4888]: E1006 15:14:33.517617 4888 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 06 15:14:33 crc kubenswrapper[4888]: E1006 15:14:33.517664 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ad05938-27d0-4006-a663-5f3ae2526053-cert podName:3ad05938-27d0-4006-a663-5f3ae2526053 nodeName:}" failed. No retries permitted until 2025-10-06 15:14:34.517648765 +0000 UTC m=+814.329999483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ad05938-27d0-4006-a663-5f3ae2526053-cert") pod "openstack-operator-controller-manager-847bc59d9d-djwkk" (UID: "3ad05938-27d0-4006-a663-5f3ae2526053") : secret "webhook-server-cert" not found Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.690703 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.838433 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/216a208a-e34a-4796-a72d-79fb0dba1491-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4\" (UID: \"216a208a-e34a-4796-a72d-79fb0dba1491\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.859060 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/216a208a-e34a-4796-a72d-79fb0dba1491-cert\") pod \"openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4\" (UID: \"216a208a-e34a-4796-a72d-79fb0dba1491\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" Oct 06 15:14:33 crc kubenswrapper[4888]: I1006 15:14:33.913415 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.010289 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd"] Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.023254 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx" event={"ID":"ba1d4452-d22b-4724-93eb-bf70500f2040","Type":"ContainerStarted","Data":"08b24ab38ec574153bac94a79984c2596411403911c5bce1e5fea32a6ad69ea2"} Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.028470 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-54b4974c45-nn686"] Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.085256 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r"] Oct 06 15:14:34 crc kubenswrapper[4888]: W1006 15:14:34.096936 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc951d569_cbb5_4525_b66d_07f2473db97a.slice/crio-93e541a9295b0852992ee5f9ac9d7ea09378c09cb7fc9f44c2be029c59672019 WatchSource:0}: Error finding container 93e541a9295b0852992ee5f9ac9d7ea09378c09cb7fc9f44c2be029c59672019: Status 404 returned error can't find the container with id 93e541a9295b0852992ee5f9ac9d7ea09378c09cb7fc9f44c2be029c59672019 Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.099073 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr"] Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.106336 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m"] Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.114655 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg"] Oct 06 15:14:34 crc kubenswrapper[4888]: W1006 15:14:34.117366 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ab9d525_93a8_4920_a6e4_e70dfd942ce3.slice/crio-e32410b98f31465d8e85a4bda6afcccc7dd90dc3804ee4cd84438ae37fb293dc WatchSource:0}: Error finding container e32410b98f31465d8e85a4bda6afcccc7dd90dc3804ee4cd84438ae37fb293dc: Status 404 returned error can't find the container with id e32410b98f31465d8e85a4bda6afcccc7dd90dc3804ee4cd84438ae37fb293dc Oct 06 15:14:34 crc kubenswrapper[4888]: W1006 15:14:34.119389 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3c80163_0837_4095_96c9_2d51ac49b7c4.slice/crio-76f95fa4094bd937563f8d3ce0492f91e0cf66e0703259e07b77f95235771e3b WatchSource:0}: Error finding container 76f95fa4094bd937563f8d3ce0492f91e0cf66e0703259e07b77f95235771e3b: Status 404 returned error can't find the container with id 76f95fa4094bd937563f8d3ce0492f91e0cf66e0703259e07b77f95235771e3b Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.120004 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-649675d675-jxh84"] Oct 06 15:14:34 crc kubenswrapper[4888]: W1006 15:14:34.136167 4888 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7b475c9_9590_41e5_9bd4_d7f9fb04958c.slice/crio-17d7affef90968962ae74e0ee76cdc507796a670f43979f15fa8c9aa54f8fa98 WatchSource:0}: Error finding container 17d7affef90968962ae74e0ee76cdc507796a670f43979f15fa8c9aa54f8fa98: Status 404 returned error can't find the container with id 17d7affef90968962ae74e0ee76cdc507796a670f43979f15fa8c9aa54f8fa98 Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.188013 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd"] Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.201484 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z"] Oct 06 15:14:34 crc kubenswrapper[4888]: W1006 15:14:34.217243 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3e1786a_e54d_4a41_a974_fab79300a4b9.slice/crio-54cd8e501b4001454b959808c60864275d24eac0df023cc18caf8c6b3a26949b WatchSource:0}: Error finding container 54cd8e501b4001454b959808c60864275d24eac0df023cc18caf8c6b3a26949b: Status 404 returned error can't find the container with id 54cd8e501b4001454b959808c60864275d24eac0df023cc18caf8c6b3a26949b Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.390159 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5"] Oct 06 15:14:34 crc kubenswrapper[4888]: W1006 15:14:34.407609 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb798c5fc_8252_4301_b5f2_6d47107266c9.slice/crio-9e479955502f0df034ed1adc28fa8ff34b46f4f70139aec8d39a04eab765d8a8 WatchSource:0}: Error finding container 9e479955502f0df034ed1adc28fa8ff34b46f4f70139aec8d39a04eab765d8a8: Status 404 returned error can't find the container with id 9e479955502f0df034ed1adc28fa8ff34b46f4f70139aec8d39a04eab765d8a8 Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.562016 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ad05938-27d0-4006-a663-5f3ae2526053-cert\") pod \"openstack-operator-controller-manager-847bc59d9d-djwkk\" (UID: \"3ad05938-27d0-4006-a663-5f3ae2526053\") " pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.576322 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ad05938-27d0-4006-a663-5f3ae2526053-cert\") pod \"openstack-operator-controller-manager-847bc59d9d-djwkk\" (UID: \"3ad05938-27d0-4006-a663-5f3ae2526053\") " pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.632966 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl"] Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.655197 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr"] Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.660314 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.667446 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z"] Oct 06 15:14:34 crc kubenswrapper[4888]: W1006 15:14:34.684075 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc6afaf6_e217_4d09_8cdf_d0ad4dd79db9.slice/crio-f4b34b68a4de3138cd44d38ddef6b38e3f00b57678387baefb1f9cae73b852e4 WatchSource:0}: Error finding container f4b34b68a4de3138cd44d38ddef6b38e3f00b57678387baefb1f9cae73b852e4: Status 404 returned error can't find the container with id f4b34b68a4de3138cd44d38ddef6b38e3f00b57678387baefb1f9cae73b852e4 Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.709842 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh"] Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.773534 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x"] Oct 06 15:14:34 crc kubenswrapper[4888]: E1006 15:14:34.778019 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:a6f1dcab931fd4b818010607ede65150742563b3c81a3ad3d739ef7953cace0b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2gqpn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod keystone-operator-controller-manager-7b5ccf6d9c-4p87x_openstack-operators(614bf51f-2fa7-48ae-a9e2-2f371656f326): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 15:14:34 crc kubenswrapper[4888]: E1006 15:14:34.793103 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-znhph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7c7fc454ff-z5nwh_openstack-operators(b725fd76-028e-4dc0-bbc5-8d18cf1e667b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.865787 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96"] Oct 06 15:14:34 crc kubenswrapper[4888]: W1006 15:14:34.868562 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99965d92_2fb3_4bf4_8ccf_ab574aa1a4c2.slice/crio-7d4d8fe07476f5f8701f6c0f4fb407b0aaa946ad2f336d27b75ac1dc502a1482 WatchSource:0}: Error finding container 7d4d8fe07476f5f8701f6c0f4fb407b0aaa946ad2f336d27b75ac1dc502a1482: Status 404 returned error can't find the container with id 7d4d8fe07476f5f8701f6c0f4fb407b0aaa946ad2f336d27b75ac1dc502a1482 Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.883478 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d"] Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.902958 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp"] Oct 06 15:14:34 crc kubenswrapper[4888]: E1006 15:14:34.909494 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:adc23c5fd1aece2b16dc8e22ceed628f9a719455e39d3f98c77544665c6749e1,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2mp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-54689d9f88-m5c7d_openstack-operators(edac5ad0-266b-449a-a2a8-95eb9afb0348): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 15:14:34 crc kubenswrapper[4888]: E1006 15:14:34.909807 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hngwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd5cb47d7-c6fgp_openstack-operators(99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 15:14:34 crc kubenswrapper[4888]: I1006 15:14:34.995759 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4"] Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.008867 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx"] Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.018163 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz"] Oct 06 15:14:35 crc kubenswrapper[4888]: E1006 15:14:35.039474 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Val
ue:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.
io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/o
penstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcnjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4_openstack-operators(216a208a-e34a-4796-a72d-79fb0dba1491): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 06 15:14:35 crc kubenswrapper[4888]: W1006 15:14:35.051543 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d3c45dc_4628_4b73_b123_dda0b0cb4d72.slice/crio-6daca23af4360e7e8a7ae35947e26e16fa496737f9e85359c19e8d04a91d2f5b WatchSource:0}: Error finding container 6daca23af4360e7e8a7ae35947e26e16fa496737f9e85359c19e8d04a91d2f5b: Status 404 returned error can't find the container with id 6daca23af4360e7e8a7ae35947e26e16fa496737f9e85359c19e8d04a91d2f5b Oct 06 15:14:35 crc kubenswrapper[4888]: W1006 15:14:35.140939 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3acc46c_a819_4e19_8534_34855edcdbaa.slice/crio-385bbc041be13b0295ed568a489add4b161c6a9196f4fcd3973557e13a09ddf5 WatchSource:0}: Error finding container 385bbc041be13b0295ed568a489add4b161c6a9196f4fcd3973557e13a09ddf5: Status 404 returned error can't find the container with id 385bbc041be13b0295ed568a489add4b161c6a9196f4fcd3973557e13a09ddf5 Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.231240 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" event={"ID":"0ab9d525-93a8-4920-a6e4-e70dfd942ce3","Type":"ContainerStarted","Data":"e32410b98f31465d8e85a4bda6afcccc7dd90dc3804ee4cd84438ae37fb293dc"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.235199 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" 
event={"ID":"23fa3eba-f10e-42b2-bc39-2df07d518a0e","Type":"ContainerStarted","Data":"dec56c0fd06bc651a131721e63726215e1a33985024835db3921f701d2f08224"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.240478 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" event={"ID":"c951d569-cbb5-4525-b66d-07f2473db97a","Type":"ContainerStarted","Data":"93e541a9295b0852992ee5f9ac9d7ea09378c09cb7fc9f44c2be029c59672019"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.242209 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" event={"ID":"b725fd76-028e-4dc0-bbc5-8d18cf1e667b","Type":"ContainerStarted","Data":"0d0c28771d11c0c6843ecbbe8e022b3f8710c393244aaa22671d4d6640b3db7b"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.246854 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" event={"ID":"614bf51f-2fa7-48ae-a9e2-2f371656f326","Type":"ContainerStarted","Data":"541d9d253f9240e209ea03ecb1546af72a242a2b003ac455d44314ef026983a9"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.248716 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr" event={"ID":"a4a728bc-e48c-43b7-b143-aded2946ee76","Type":"ContainerStarted","Data":"eb181ee72f5ab32649e6a15b0abdc45ec20d615002d7a748fc740751929597f5"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.262158 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" event={"ID":"216a208a-e34a-4796-a72d-79fb0dba1491","Type":"ContainerStarted","Data":"66516cc6d6240d7ce45c945cb7a23e4ab309223fd1dc080bfd8c11836353b815"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.271260 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m" event={"ID":"e3c80163-0837-4095-96c9-2d51ac49b7c4","Type":"ContainerStarted","Data":"76f95fa4094bd937563f8d3ce0492f91e0cf66e0703259e07b77f95235771e3b"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.280100 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5" event={"ID":"b798c5fc-8252-4301-b5f2-6d47107266c9","Type":"ContainerStarted","Data":"9e479955502f0df034ed1adc28fa8ff34b46f4f70139aec8d39a04eab765d8a8"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.294439 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" event={"ID":"99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2","Type":"ContainerStarted","Data":"7d4d8fe07476f5f8701f6c0f4fb407b0aaa946ad2f336d27b75ac1dc502a1482"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.296270 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" event={"ID":"a3e1786a-e54d-4a41-a974-fab79300a4b9","Type":"ContainerStarted","Data":"54cd8e501b4001454b959808c60864275d24eac0df023cc18caf8c6b3a26949b"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.297572 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" 
event={"ID":"470e257a-9b82-4ffc-88a2-974afe3d6abb","Type":"ContainerStarted","Data":"fd801981f1330db1c6d648a7e6b73aee24b0ee8c62a80c34075e3f8a61a7f8a7"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.298492 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl" event={"ID":"cc6afaf6-e217-4d09-8cdf-d0ad4dd79db9","Type":"ContainerStarted","Data":"f4b34b68a4de3138cd44d38ddef6b38e3f00b57678387baefb1f9cae73b852e4"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.300813 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg" event={"ID":"4871270f-8f10-42b8-880a-7ff6ab0d1476","Type":"ContainerStarted","Data":"12628b81d8af83b2cb0c7c3ff64bbb5505487259268f81ad7f1f99946763f264"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.306498 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" event={"ID":"5a1c3c1c-2a06-49a0-9189-acbcdd0053c6","Type":"ContainerStarted","Data":"0d929326d8c6f23aa91b14a1599cbc3a7c5ed3d40c8cf1536240db56d02ada9f"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.309954 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd" event={"ID":"18e5b74c-d61b-4916-a059-5daa7e2b6277","Type":"ContainerStarted","Data":"6a2ad177dfad6b27930b7e4b4b63025db7739f58151b317991edf84aee8b245f"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.314398 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" event={"ID":"edac5ad0-266b-449a-a2a8-95eb9afb0348","Type":"ContainerStarted","Data":"1ec7c4551c242388709b34632067baf5c7d452eef43f7977db9dcaeb3b47e3d9"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.316392 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" event={"ID":"c7b475c9-9590-41e5-9bd4-d7f9fb04958c","Type":"ContainerStarted","Data":"17d7affef90968962ae74e0ee76cdc507796a670f43979f15fa8c9aa54f8fa98"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.318482 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" event={"ID":"919edfbf-4b21-44cb-b821-3d3294a2beb1","Type":"ContainerStarted","Data":"998c112ba784a4894d0c314c5973c330e06e29588c319f06b438f603d7eb6a54"} Oct 06 15:14:35 crc kubenswrapper[4888]: I1006 15:14:35.576693 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk"] Oct 06 15:14:35 crc kubenswrapper[4888]: E1006 15:14:35.600704 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" podUID="b725fd76-028e-4dc0-bbc5-8d18cf1e667b" Oct 06 15:14:35 crc kubenswrapper[4888]: E1006 15:14:35.641629 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" podUID="614bf51f-2fa7-48ae-a9e2-2f371656f326" Oct 06 15:14:35 crc kubenswrapper[4888]: E1006 15:14:35.653611 4888 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" podUID="216a208a-e34a-4796-a72d-79fb0dba1491" Oct 06 15:14:35 crc kubenswrapper[4888]: E1006 15:14:35.661515 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" podUID="99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2" Oct 06 15:14:35 crc kubenswrapper[4888]: E1006 15:14:35.732641 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" podUID="edac5ad0-266b-449a-a2a8-95eb9afb0348" Oct 06 15:14:36 crc kubenswrapper[4888]: I1006 15:14:36.354832 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" event={"ID":"216a208a-e34a-4796-a72d-79fb0dba1491","Type":"ContainerStarted","Data":"6ca91ee7a10dfa143fa26ddeb1b8ce2bbcd0d2f63b5c22bf2a44c59b24cd574c"} Oct 06 15:14:36 crc kubenswrapper[4888]: E1006 15:14:36.376914 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" podUID="216a208a-e34a-4796-a72d-79fb0dba1491" Oct 06 15:14:36 crc kubenswrapper[4888]: I1006 15:14:36.387139 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" event={"ID":"edac5ad0-266b-449a-a2a8-95eb9afb0348","Type":"ContainerStarted","Data":"18d7279475f82089c4d2b7bc44b3aa83225a474506492033d8ab1b8aa4198c87"} Oct 06 15:14:36 crc kubenswrapper[4888]: E1006 15:14:36.399224 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:adc23c5fd1aece2b16dc8e22ceed628f9a719455e39d3f98c77544665c6749e1\\\"\"" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" podUID="edac5ad0-266b-449a-a2a8-95eb9afb0348" Oct 06 15:14:36 crc kubenswrapper[4888]: I1006 15:14:36.418395 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" event={"ID":"b725fd76-028e-4dc0-bbc5-8d18cf1e667b","Type":"ContainerStarted","Data":"aee0f4ce7c5137b3acc15d29e899513d49f4f2c441a0c65a32440e17918d742b"} Oct 06 15:14:36 crc kubenswrapper[4888]: I1006 15:14:36.420308 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" event={"ID":"614bf51f-2fa7-48ae-a9e2-2f371656f326","Type":"ContainerStarted","Data":"6514313498de6aaf1d96551b26f9a2e9b45be14f0d1b474eac660a2a8d67a6a0"} Oct 06 15:14:36 crc kubenswrapper[4888]: E1006 15:14:36.426993 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:a6f1dcab931fd4b818010607ede65150742563b3c81a3ad3d739ef7953cace0b\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" podUID="614bf51f-2fa7-48ae-a9e2-2f371656f326" Oct 06 15:14:36 crc kubenswrapper[4888]: E1006 15:14:36.427074 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" podUID="b725fd76-028e-4dc0-bbc5-8d18cf1e667b" Oct 06 15:14:36 crc kubenswrapper[4888]: I1006 15:14:36.434769 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" event={"ID":"7d3c45dc-4628-4b73-b123-dda0b0cb4d72","Type":"ContainerStarted","Data":"6daca23af4360e7e8a7ae35947e26e16fa496737f9e85359c19e8d04a91d2f5b"} Oct 06 15:14:36 crc kubenswrapper[4888]: I1006 15:14:36.436427 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx" event={"ID":"b3acc46c-a819-4e19-8534-34855edcdbaa","Type":"ContainerStarted","Data":"385bbc041be13b0295ed568a489add4b161c6a9196f4fcd3973557e13a09ddf5"} Oct 06 15:14:36 crc kubenswrapper[4888]: I1006 15:14:36.461701 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" event={"ID":"99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2","Type":"ContainerStarted","Data":"35246fad68b4f0cf84cc6c85d063b9db6234fe8fc4aee15148e1819d6c6e2949"} Oct 06 15:14:36 crc kubenswrapper[4888]: E1006 15:14:36.469574 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" podUID="99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2" Oct 06 15:14:36 crc kubenswrapper[4888]: I1006 15:14:36.474881 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" event={"ID":"3ad05938-27d0-4006-a663-5f3ae2526053","Type":"ContainerStarted","Data":"93eb9fca96783a668b5a36c905ba3d6f76514a3c0896f8c2f948158141b5a90e"} Oct 06 15:14:36 crc kubenswrapper[4888]: I1006 15:14:36.474933 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" event={"ID":"3ad05938-27d0-4006-a663-5f3ae2526053","Type":"ContainerStarted","Data":"7f66ff6f5ed8387c94e138e4a0810fb09a8c144be65b8fd41e3960ed8b261093"} Oct 06 15:14:36 crc kubenswrapper[4888]: I1006 15:14:36.475718 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:36 crc kubenswrapper[4888]: I1006 15:14:36.643879 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" podStartSLOduration=4.643838249 podStartE2EDuration="4.643838249s" podCreationTimestamp="2025-10-06 15:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:14:36.634881208 +0000 UTC m=+816.447231936" watchObservedRunningTime="2025-10-06 15:14:36.643838249 +0000 UTC m=+816.456188967" Oct 06 15:14:37 crc kubenswrapper[4888]: I1006 15:14:37.524577 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" event={"ID":"3ad05938-27d0-4006-a663-5f3ae2526053","Type":"ContainerStarted","Data":"3403bdf7dad942e4a9b85340f68cea5630329f7f6e7cf064f89a0dd8d3a019ff"} Oct 06 15:14:37 crc kubenswrapper[4888]: E1006 15:14:37.529418 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:adc23c5fd1aece2b16dc8e22ceed628f9a719455e39d3f98c77544665c6749e1\\\"\"" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" podUID="edac5ad0-266b-449a-a2a8-95eb9afb0348" Oct 06 15:14:37 crc kubenswrapper[4888]: E1006 15:14:37.529433 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:bcd1acac74e68eea5a9c3b7ba1bcb29d3a5b43423fc23c19ad4715bdac41f799\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" podUID="216a208a-e34a-4796-a72d-79fb0dba1491" Oct 06 15:14:37 crc kubenswrapper[4888]: E1006 15:14:37.529418 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:a6f1dcab931fd4b818010607ede65150742563b3c81a3ad3d739ef7953cace0b\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" podUID="614bf51f-2fa7-48ae-a9e2-2f371656f326" Oct 06 15:14:37 crc kubenswrapper[4888]: E1006 15:14:37.530203 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" podUID="99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2" Oct 06 15:14:37 crc kubenswrapper[4888]: E1006 15:14:37.532897 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:e4c4ff39c54c0af231fb781759ab50ed86285c74d38bdea43fa75646b762d842\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" podUID="b725fd76-028e-4dc0-bbc5-8d18cf1e667b" Oct 06 15:14:44 crc kubenswrapper[4888]: I1006 15:14:44.667295 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-847bc59d9d-djwkk" Oct 06 15:14:49 crc kubenswrapper[4888]: E1006 15:14:49.148279 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed" Oct 06 15:14:49 crc kubenswrapper[4888]: E1006 15:14:49.149116 
4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zdv8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6859f9b676-qtk96_openstack-operators(470e257a-9b82-4ffc-88a2-974afe3d6abb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 15:14:49 crc kubenswrapper[4888]: E1006 15:14:49.608183 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:585796b996a5b6d7ad68f0cb420bf4f2ee38c9f16f194e3111c162ce91ea8a7b" Oct 06 15:14:49 crc kubenswrapper[4888]: E1006 15:14:49.608564 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:585796b996a5b6d7ad68f0cb420bf4f2ee38c9f16f194e3111c162ce91ea8a7b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: 
{{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhqrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-75dfd9b554-wdknr_openstack-operators(919edfbf-4b21-44cb-b821-3d3294a2beb1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 15:14:50 crc kubenswrapper[4888]: E1006 15:14:50.040157 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:6c4a37dbd0596874d442559eda2ae68411ec1e4dc39b3a125dd13fa9efb91c20" Oct 06 15:14:50 crc kubenswrapper[4888]: E1006 15:14:50.040374 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:6c4a37dbd0596874d442559eda2ae68411ec1e4dc39b3a125dd13fa9efb91c20,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dq59b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-54b4974c45-nn686_openstack-operators(c951d569-cbb5-4525-b66d-07f2473db97a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 15:14:51 crc kubenswrapper[4888]: E1006 15:14:51.519355 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f" Oct 06 15:14:51 crc kubenswrapper[4888]: E1006 15:14:51.519876 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bsp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-658588b8c9-nbssz_openstack-operators(7d3c45dc-4628-4b73-b123-dda0b0cb4d72): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 15:14:51 crc kubenswrapper[4888]: E1006 15:14:51.958018 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:5f96b563a63494082323bfced089d6589e0c89db43c6a39a2e912c79b1a278fe" Oct 06 15:14:51 crc kubenswrapper[4888]: E1006 15:14:51.958172 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:5f96b563a63494082323bfced089d6589e0c89db43c6a39a2e912c79b1a278fe,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jxk2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6cd6d7bdf5-4m69r_openstack-operators(0ab9d525-93a8-4920-a6e4-e70dfd942ce3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 15:14:53 crc kubenswrapper[4888]: E1006 15:14:53.207434 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:517ffec92b586293193d84d5c4d0ec2093be9fade5fde0fe4a41e2ea7685432c" Oct 06 15:14:53 crc kubenswrapper[4888]: E1006 15:14:53.207587 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:517ffec92b586293193d84d5c4d0ec2093be9fade5fde0fe4a41e2ea7685432c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mthdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-58c4cd55f4-kzz5z_openstack-operators(a3e1786a-e54d-4a41-a974-fab79300a4b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 15:14:53 crc kubenswrapper[4888]: E1006 15:14:53.795387 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:785670b14b19ffd7e0799dcf3e3e275329fa822d4a604eace09574f8bb1f8162" Oct 06 15:14:53 crc kubenswrapper[4888]: E1006 15:14:53.795877 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:785670b14b19ffd7e0799dcf3e3e275329fa822d4a604eace09574f8bb1f8162,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4cpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-649675d675-jxh84_openstack-operators(c7b475c9-9590-41e5-9bd4-d7f9fb04958c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 15:14:54 crc kubenswrapper[4888]: E1006 15:14:54.233023 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:f37e29d1f621c23c0d77b09076006d1e8002a77c2ff3d9b8921f893221cb1d09" Oct 06 15:14:54 crc kubenswrapper[4888]: E1006 15:14:54.233195 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:f37e29d1f621c23c0d77b09076006d1e8002a77c2ff3d9b8921f893221cb1d09,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fzpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6d8b6f9b9-jsk2z_openstack-operators(23fa3eba-f10e-42b2-bc39-2df07d518a0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 15:14:54 crc kubenswrapper[4888]: E1006 15:14:54.679384 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:063aae1458289d1090a77c74c2b978b9eb978b0e4062c399f0cb5434a8dd2757" Oct 06 15:14:54 crc kubenswrapper[4888]: E1006 15:14:54.679555 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:063aae1458289d1090a77c74c2b978b9eb978b0e4062c399f0cb5434a8dd2757,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zvxzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-65d89cfd9f-mmfzd_openstack-operators(5a1c3c1c-2a06-49a0-9189-acbcdd0053c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 15:14:56 crc kubenswrapper[4888]: E1006 15:14:56.258514 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" podUID="c951d569-cbb5-4525-b66d-07f2473db97a" Oct 06 15:14:56 crc kubenswrapper[4888]: I1006 15:14:56.686524 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" event={"ID":"c951d569-cbb5-4525-b66d-07f2473db97a","Type":"ContainerStarted","Data":"c22f4bdf478f587f1bbc6074a34037c7a7fc8c32777f46929a2cb0e3d81b475a"} Oct 06 15:14:56 crc kubenswrapper[4888]: E1006 15:14:56.689833 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:6c4a37dbd0596874d442559eda2ae68411ec1e4dc39b3a125dd13fa9efb91c20\\\"\"" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" podUID="c951d569-cbb5-4525-b66d-07f2473db97a" Oct 06 15:14:57 crc kubenswrapper[4888]: E1006 15:14:57.695356 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:6c4a37dbd0596874d442559eda2ae68411ec1e4dc39b3a125dd13fa9efb91c20\\\"\"" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" podUID="c951d569-cbb5-4525-b66d-07f2473db97a" Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.420767 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" podUID="7d3c45dc-4628-4b73-b123-dda0b0cb4d72" Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.522626 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" podUID="a3e1786a-e54d-4a41-a974-fab79300a4b9" Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.526635 4888 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" podUID="23fa3eba-f10e-42b2-bc39-2df07d518a0e" Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.526697 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" podUID="470e257a-9b82-4ffc-88a2-974afe3d6abb" Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.579848 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" podUID="c7b475c9-9590-41e5-9bd4-d7f9fb04958c" Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.585596 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" podUID="5a1c3c1c-2a06-49a0-9189-acbcdd0053c6" Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.588377 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" podUID="919edfbf-4b21-44cb-b821-3d3294a2beb1" Oct 06 15:14:58 crc kubenswrapper[4888]: I1006 15:14:58.715719 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" event={"ID":"23fa3eba-f10e-42b2-bc39-2df07d518a0e","Type":"ContainerStarted","Data":"9131b0ae6aa1d27fc10b961ab0e6f3d38f49289f78538c051e82d057033c9d0d"} Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.719986 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f37e29d1f621c23c0d77b09076006d1e8002a77c2ff3d9b8921f893221cb1d09\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" podUID="23fa3eba-f10e-42b2-bc39-2df07d518a0e" Oct 06 15:14:58 crc kubenswrapper[4888]: I1006 15:14:58.729690 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg" event={"ID":"4871270f-8f10-42b8-880a-7ff6ab0d1476","Type":"ContainerStarted","Data":"726dd16b6bd2f542a70c4664e386ed9e580929403f55dc67879ca95afc0eb1ed"} Oct 06 15:14:58 crc kubenswrapper[4888]: I1006 15:14:58.744358 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl" event={"ID":"cc6afaf6-e217-4d09-8cdf-d0ad4dd79db9","Type":"ContainerStarted","Data":"159c9ec2fcf2ffa4109e3a9765755273cf6bc4c860a481aed41bc3427de4f975"} Oct 06 15:14:58 crc kubenswrapper[4888]: I1006 15:14:58.756544 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" 
event={"ID":"919edfbf-4b21-44cb-b821-3d3294a2beb1","Type":"ContainerStarted","Data":"4866ca68d2e70f5a2c7dd9535fb51b08ede8107c6fad14d2277a49d9e82df433"} Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.758895 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:585796b996a5b6d7ad68f0cb420bf4f2ee38c9f16f194e3111c162ce91ea8a7b\\\"\"" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" podUID="919edfbf-4b21-44cb-b821-3d3294a2beb1" Oct 06 15:14:58 crc kubenswrapper[4888]: I1006 15:14:58.762982 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" event={"ID":"5a1c3c1c-2a06-49a0-9189-acbcdd0053c6","Type":"ContainerStarted","Data":"afb51264b1979a75b9bb52dfa990a7e8d2a571cbe0d302ed0ef1518eb6d3a356"} Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.765947 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:063aae1458289d1090a77c74c2b978b9eb978b0e4062c399f0cb5434a8dd2757\\\"\"" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" podUID="5a1c3c1c-2a06-49a0-9189-acbcdd0053c6" Oct 06 15:14:58 crc kubenswrapper[4888]: I1006 15:14:58.771011 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" event={"ID":"7d3c45dc-4628-4b73-b123-dda0b0cb4d72","Type":"ContainerStarted","Data":"cd45dd6970cabcefc5e887bf0c6469912e7f7f904eb34cd04f1e87c22146e451"} Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.772522 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" podUID="7d3c45dc-4628-4b73-b123-dda0b0cb4d72" Oct 06 15:14:58 crc kubenswrapper[4888]: I1006 15:14:58.783194 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" event={"ID":"a3e1786a-e54d-4a41-a974-fab79300a4b9","Type":"ContainerStarted","Data":"f045807793e43790cd5a7afe5c0899e1a4edf718c8fd8bab4701d00f8d1c6fb5"} Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.787016 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:517ffec92b586293193d84d5c4d0ec2093be9fade5fde0fe4a41e2ea7685432c\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" podUID="a3e1786a-e54d-4a41-a974-fab79300a4b9" Oct 06 15:14:58 crc kubenswrapper[4888]: I1006 15:14:58.792026 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5" event={"ID":"b798c5fc-8252-4301-b5f2-6d47107266c9","Type":"ContainerStarted","Data":"ebe411e3fc59c1c221184aaa168dd5be07cfe4af8dd3fbf164d887b7580c1aa8"} Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.801122 4888 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" podUID="0ab9d525-93a8-4920-a6e4-e70dfd942ce3" Oct 06 15:14:58 crc kubenswrapper[4888]: I1006 15:14:58.804027 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" event={"ID":"470e257a-9b82-4ffc-88a2-974afe3d6abb","Type":"ContainerStarted","Data":"dbed411a57551110df8878e14af48c5bcf5245170db59a47036818285ebd1e97"} Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.805522 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" podUID="470e257a-9b82-4ffc-88a2-974afe3d6abb" Oct 06 15:14:58 crc kubenswrapper[4888]: I1006 15:14:58.810446 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" event={"ID":"c7b475c9-9590-41e5-9bd4-d7f9fb04958c","Type":"ContainerStarted","Data":"807c61dc8efc3bb0ab896c3750e31afb87eb1f19b3415175cbed21f40ca3b848"} Oct 06 15:14:58 crc kubenswrapper[4888]: E1006 15:14:58.814156 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:785670b14b19ffd7e0799dcf3e3e275329fa822d4a604eace09574f8bb1f8162\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" podUID="c7b475c9-9590-41e5-9bd4-d7f9fb04958c" Oct 06 15:14:59 crc kubenswrapper[4888]: I1006 15:14:59.818467 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" event={"ID":"614bf51f-2fa7-48ae-a9e2-2f371656f326","Type":"ContainerStarted","Data":"b61b50d722fcb920c86a72f9cada2e6b9660c9b82d54f4756808035185b42312"} Oct 06 15:14:59 crc kubenswrapper[4888]: I1006 15:14:59.818928 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" Oct 06 15:14:59 crc kubenswrapper[4888]: I1006 15:14:59.821449 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx" event={"ID":"b3acc46c-a819-4e19-8534-34855edcdbaa","Type":"ContainerStarted","Data":"a8ba8b748435c95098bc4565887619267f3f4a972d630899c04014a516f67db6"} Oct 06 15:14:59 crc kubenswrapper[4888]: I1006 15:14:59.823091 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx" event={"ID":"ba1d4452-d22b-4724-93eb-bf70500f2040","Type":"ContainerStarted","Data":"4e5069994f8b1aa7197678918bc7aa8a3d5f29a5f1da563ae74fc0fbda853891"} Oct 06 15:14:59 crc kubenswrapper[4888]: I1006 15:14:59.824546 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd" event={"ID":"18e5b74c-d61b-4916-a059-5daa7e2b6277","Type":"ContainerStarted","Data":"970c0df9b211558742ff97a6e421538ac6ef2cb41f90c676dda87e1a2cdd6dcf"} Oct 06 15:14:59 crc 
kubenswrapper[4888]: I1006 15:14:59.826373 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" event={"ID":"0ab9d525-93a8-4920-a6e4-e70dfd942ce3","Type":"ContainerStarted","Data":"2aee259328241ab132353e72a01cc9096dc46c2c221377a75e76e5ea27003fb0"} Oct 06 15:14:59 crc kubenswrapper[4888]: E1006 15:14:59.829300 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:517ffec92b586293193d84d5c4d0ec2093be9fade5fde0fe4a41e2ea7685432c\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" podUID="a3e1786a-e54d-4a41-a974-fab79300a4b9" Oct 06 15:14:59 crc kubenswrapper[4888]: E1006 15:14:59.829438 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:063aae1458289d1090a77c74c2b978b9eb978b0e4062c399f0cb5434a8dd2757\\\"\"" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" podUID="5a1c3c1c-2a06-49a0-9189-acbcdd0053c6" Oct 06 15:14:59 crc kubenswrapper[4888]: E1006 15:14:59.829651 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:785670b14b19ffd7e0799dcf3e3e275329fa822d4a604eace09574f8bb1f8162\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" podUID="c7b475c9-9590-41e5-9bd4-d7f9fb04958c" Oct 06 15:14:59 crc kubenswrapper[4888]: E1006 15:14:59.829696 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:b6cef68bfaacdf992a9fa1a6b03a848a48c18cbb6ed12d95561b4b37d858b99f\\\"\"" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" podUID="7d3c45dc-4628-4b73-b123-dda0b0cb4d72" Oct 06 15:14:59 crc kubenswrapper[4888]: E1006 15:14:59.829835 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:5f96b563a63494082323bfced089d6589e0c89db43c6a39a2e912c79b1a278fe\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" podUID="0ab9d525-93a8-4920-a6e4-e70dfd942ce3" Oct 06 15:14:59 crc kubenswrapper[4888]: E1006 15:14:59.829888 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f37e29d1f621c23c0d77b09076006d1e8002a77c2ff3d9b8921f893221cb1d09\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" podUID="23fa3eba-f10e-42b2-bc39-2df07d518a0e" Oct 06 15:14:59 crc kubenswrapper[4888]: I1006 15:14:59.846088 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x" podStartSLOduration=5.472264602 podStartE2EDuration="28.846069128s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.777862889 +0000 
UTC m=+814.590213617" lastFinishedPulling="2025-10-06 15:14:58.151667425 +0000 UTC m=+837.964018143" observedRunningTime="2025-10-06 15:14:59.843514968 +0000 UTC m=+839.655865696" watchObservedRunningTime="2025-10-06 15:14:59.846069128 +0000 UTC m=+839.658419846" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.140075 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz"] Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.141383 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.144601 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.147935 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.149741 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz"] Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.205773 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4706e942-55aa-4d9b-a703-ab2566f31e8b-secret-volume\") pod \"collect-profiles-29329395-hddtz\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.206096 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2d5c\" (UniqueName: \"kubernetes.io/projected/4706e942-55aa-4d9b-a703-ab2566f31e8b-kube-api-access-w2d5c\") pod \"collect-profiles-29329395-hddtz\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.206168 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4706e942-55aa-4d9b-a703-ab2566f31e8b-config-volume\") pod \"collect-profiles-29329395-hddtz\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.307496 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2d5c\" (UniqueName: \"kubernetes.io/projected/4706e942-55aa-4d9b-a703-ab2566f31e8b-kube-api-access-w2d5c\") pod \"collect-profiles-29329395-hddtz\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.307556 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4706e942-55aa-4d9b-a703-ab2566f31e8b-config-volume\") pod \"collect-profiles-29329395-hddtz\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.307620 4888 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4706e942-55aa-4d9b-a703-ab2566f31e8b-secret-volume\") pod \"collect-profiles-29329395-hddtz\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.308711 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4706e942-55aa-4d9b-a703-ab2566f31e8b-config-volume\") pod \"collect-profiles-29329395-hddtz\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.315699 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4706e942-55aa-4d9b-a703-ab2566f31e8b-secret-volume\") pod \"collect-profiles-29329395-hddtz\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.322689 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2d5c\" (UniqueName: \"kubernetes.io/projected/4706e942-55aa-4d9b-a703-ab2566f31e8b-kube-api-access-w2d5c\") pod \"collect-profiles-29329395-hddtz\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.458891 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" Oct 06 15:15:00 crc kubenswrapper[4888]: E1006 15:15:00.836462 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:5f96b563a63494082323bfced089d6589e0c89db43c6a39a2e912c79b1a278fe\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" podUID="0ab9d525-93a8-4920-a6e4-e70dfd942ce3" Oct 06 15:15:00 crc kubenswrapper[4888]: I1006 15:15:00.883092 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz"] Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.842927 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx" event={"ID":"ba1d4452-d22b-4724-93eb-bf70500f2040","Type":"ContainerStarted","Data":"76ac786c4625317c8efc1ed13c1f162c46412b899d1cb3e4946674f258cd35e1"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.843519 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.846364 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" event={"ID":"216a208a-e34a-4796-a72d-79fb0dba1491","Type":"ContainerStarted","Data":"fcbda349728d57b3b06ec3e16e9b39664e7b48856dc7e643d11008660c01676f"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.846647 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.849810 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg" event={"ID":"4871270f-8f10-42b8-880a-7ff6ab0d1476","Type":"ContainerStarted","Data":"5d253b4889813e5d58b8298e9102ec91dd85562e720a40c9985dd86c5b17fe6c"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.849942 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.852176 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5" event={"ID":"b798c5fc-8252-4301-b5f2-6d47107266c9","Type":"ContainerStarted","Data":"54a126d2010d1cedb6630efd0d6da45a59c6ccc34cf9187228137c486e908b1c"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.852320 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.854257 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr" event={"ID":"a4a728bc-e48c-43b7-b143-aded2946ee76","Type":"ContainerStarted","Data":"000684a592f06a14fc0cb224a8c3755a894068bef98d521929ff5001285a436d"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.858913 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx" event={"ID":"b3acc46c-a819-4e19-8534-34855edcdbaa","Type":"ContainerStarted","Data":"2b0bcfca08010545b2b0ae7a306e2a423632dc1c21c6b5cde8bd2cff333f2457"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.859073 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.861480 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd" event={"ID":"18e5b74c-d61b-4916-a059-5daa7e2b6277","Type":"ContainerStarted","Data":"0f2705782818bacbc3145ae6cb795f9d44a2df9d45c280922396359a5a5352ba"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.861608 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.865324 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" event={"ID":"470e257a-9b82-4ffc-88a2-974afe3d6abb","Type":"ContainerStarted","Data":"baf30bcb2940fc505330f378ddca1207aa67cee5a2cb523a402d40c4535f010f"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.865599 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.868068 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl" 
event={"ID":"cc6afaf6-e217-4d09-8cdf-d0ad4dd79db9","Type":"ContainerStarted","Data":"d9476c46925691d82714ff7b6cd9c249cde68c13d3adca7e72fbf23c14450a13"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.868228 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.873105 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" event={"ID":"b725fd76-028e-4dc0-bbc5-8d18cf1e667b","Type":"ContainerStarted","Data":"1d6eab5c64b08c63880d0d534636608618852ed5a0c7b2607c90e6bcedcbeb87"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.873460 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.875942 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" event={"ID":"99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2","Type":"ContainerStarted","Data":"1ea8b48b0ad00d6ce35dae54ce098cbc9bfc50f74cf543bdfb45c121aba2ae6e"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.876392 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.878954 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m" event={"ID":"e3c80163-0837-4095-96c9-2d51ac49b7c4","Type":"ContainerStarted","Data":"4d76ebf171d9d7ad068a3ff994071dad56202bf96afb9cf29499b1fa779f79b6"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.882118 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" event={"ID":"4706e942-55aa-4d9b-a703-ab2566f31e8b","Type":"ContainerStarted","Data":"fe63d22a61f9906b24d9f20ee40c7c6af974d0d8a40948153e06cbb9fec593dd"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.882185 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" event={"ID":"4706e942-55aa-4d9b-a703-ab2566f31e8b","Type":"ContainerStarted","Data":"bf72755b98dadff0aa2d5781b3a8dee51933d3c408c72949c3d8b853204c5035"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.887339 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" event={"ID":"edac5ad0-266b-449a-a2a8-95eb9afb0348","Type":"ContainerStarted","Data":"c34405f4fe55ba481329b655663a923937497417a56133f32bfe896d508c7c9c"} Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.887591 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.892351 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx" podStartSLOduration=6.089702246 podStartE2EDuration="30.89233214s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:33.236460369 +0000 UTC m=+813.048811087" lastFinishedPulling="2025-10-06 15:14:58.039090263 +0000 UTC 
m=+837.851440981" observedRunningTime="2025-10-06 15:15:01.886721514 +0000 UTC m=+841.699072242" watchObservedRunningTime="2025-10-06 15:15:01.89233214 +0000 UTC m=+841.704682858" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.932091 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg" podStartSLOduration=7.027441044 podStartE2EDuration="30.93207523s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.133504708 +0000 UTC m=+813.945855426" lastFinishedPulling="2025-10-06 15:14:58.038138894 +0000 UTC m=+837.850489612" observedRunningTime="2025-10-06 15:15:01.930056487 +0000 UTC m=+841.742407205" watchObservedRunningTime="2025-10-06 15:15:01.93207523 +0000 UTC m=+841.744425938" Oct 06 15:15:01 crc kubenswrapper[4888]: I1006 15:15:01.984745 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5" podStartSLOduration=7.288979443 podStartE2EDuration="30.984727417s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.412143614 +0000 UTC m=+814.224494332" lastFinishedPulling="2025-10-06 15:14:58.107891578 +0000 UTC m=+837.920242306" observedRunningTime="2025-10-06 15:15:01.981455724 +0000 UTC m=+841.793806442" watchObservedRunningTime="2025-10-06 15:15:01.984727417 +0000 UTC m=+841.797078135" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.043686 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh" podStartSLOduration=7.662623397 podStartE2EDuration="31.043670341s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.792967464 +0000 UTC m=+814.605318192" lastFinishedPulling="2025-10-06 15:14:58.174014418 +0000 UTC m=+837.986365136" observedRunningTime="2025-10-06 15:15:02.039468629 +0000 UTC m=+841.851819357" watchObservedRunningTime="2025-10-06 15:15:02.043670341 +0000 UTC m=+841.856021059" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.069761 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr" podStartSLOduration=6.631423476 podStartE2EDuration="30.069738701s" podCreationTimestamp="2025-10-06 15:14:32 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.675176799 +0000 UTC m=+814.487527517" lastFinishedPulling="2025-10-06 15:14:58.113492014 +0000 UTC m=+837.925842742" observedRunningTime="2025-10-06 15:15:02.068083419 +0000 UTC m=+841.880434147" watchObservedRunningTime="2025-10-06 15:15:02.069738701 +0000 UTC m=+841.882089419" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.111626 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4" podStartSLOduration=7.9762238629999995 podStartE2EDuration="31.111608789s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:35.039110687 +0000 UTC m=+814.851461405" lastFinishedPulling="2025-10-06 15:14:58.174495603 +0000 UTC m=+837.986846331" observedRunningTime="2025-10-06 15:15:02.102075628 +0000 UTC m=+841.914426346" watchObservedRunningTime="2025-10-06 15:15:02.111608789 +0000 UTC m=+841.923959507" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.127750 4888 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx" podStartSLOduration=7.406273262 podStartE2EDuration="30.127733256s" podCreationTimestamp="2025-10-06 15:14:32 +0000 UTC" firstStartedPulling="2025-10-06 15:14:35.317853586 +0000 UTC m=+815.130204304" lastFinishedPulling="2025-10-06 15:14:58.03931358 +0000 UTC m=+837.851664298" observedRunningTime="2025-10-06 15:15:02.124394971 +0000 UTC m=+841.936745689" watchObservedRunningTime="2025-10-06 15:15:02.127733256 +0000 UTC m=+841.940083974" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.143836 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp" podStartSLOduration=6.878611452 podStartE2EDuration="30.143819252s" podCreationTimestamp="2025-10-06 15:14:32 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.90948895 +0000 UTC m=+814.721839658" lastFinishedPulling="2025-10-06 15:14:58.17469674 +0000 UTC m=+837.987047458" observedRunningTime="2025-10-06 15:15:02.140387084 +0000 UTC m=+841.952737802" watchObservedRunningTime="2025-10-06 15:15:02.143819252 +0000 UTC m=+841.956169970" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.156041 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd" podStartSLOduration=7.156559017 podStartE2EDuration="31.156025165s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.039108459 +0000 UTC m=+813.851459177" lastFinishedPulling="2025-10-06 15:14:58.038574607 +0000 UTC m=+837.850925325" observedRunningTime="2025-10-06 15:15:02.154374804 +0000 UTC m=+841.966725532" watchObservedRunningTime="2025-10-06 15:15:02.156025165 +0000 UTC m=+841.968375883" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.187899 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl" podStartSLOduration=8.72987557 podStartE2EDuration="30.187884318s" podCreationTimestamp="2025-10-06 15:14:32 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.698992148 +0000 UTC m=+814.511342866" lastFinishedPulling="2025-10-06 15:14:56.157000896 +0000 UTC m=+835.969351614" observedRunningTime="2025-10-06 15:15:02.183782469 +0000 UTC m=+841.996133187" watchObservedRunningTime="2025-10-06 15:15:02.187884318 +0000 UTC m=+842.000235036" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.207901 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96" podStartSLOduration=4.534533942 podStartE2EDuration="31.207883487s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.853932542 +0000 UTC m=+814.666283250" lastFinishedPulling="2025-10-06 15:15:01.527282077 +0000 UTC m=+841.339632795" observedRunningTime="2025-10-06 15:15:02.203298473 +0000 UTC m=+842.015649191" watchObservedRunningTime="2025-10-06 15:15:02.207883487 +0000 UTC m=+842.020234205" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.221660 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d" podStartSLOduration=7.9488344 podStartE2EDuration="31.221640109s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" 
firstStartedPulling="2025-10-06 15:14:34.909349825 +0000 UTC m=+814.721700553" lastFinishedPulling="2025-10-06 15:14:58.182155534 +0000 UTC m=+837.994506262" observedRunningTime="2025-10-06 15:15:02.220123562 +0000 UTC m=+842.032474280" watchObservedRunningTime="2025-10-06 15:15:02.221640109 +0000 UTC m=+842.033990827" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.896262 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m" event={"ID":"e3c80163-0837-4095-96c9-2d51ac49b7c4","Type":"ContainerStarted","Data":"256c5f67449cfcb00e2ad6484bbeba54f3fc4d050db2ac0a358ab561c63c3f5d"} Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.896693 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.898386 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" event={"ID":"919edfbf-4b21-44cb-b821-3d3294a2beb1","Type":"ContainerStarted","Data":"ae25e84a5242b619641fd0deb16530fb684dcc2156cf2fba790bde6913acd3db"} Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.899280 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.900969 4888 generic.go:334] "Generic (PLEG): container finished" podID="4706e942-55aa-4d9b-a703-ab2566f31e8b" containerID="fe63d22a61f9906b24d9f20ee40c7c6af974d0d8a40948153e06cbb9fec593dd" exitCode=0 Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.901687 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" event={"ID":"4706e942-55aa-4d9b-a703-ab2566f31e8b","Type":"ContainerDied","Data":"fe63d22a61f9906b24d9f20ee40c7c6af974d0d8a40948153e06cbb9fec593dd"} Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.917433 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m" podStartSLOduration=7.939614112 podStartE2EDuration="31.917406508s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.136789882 +0000 UTC m=+813.949140600" lastFinishedPulling="2025-10-06 15:14:58.114582278 +0000 UTC m=+837.926932996" observedRunningTime="2025-10-06 15:15:02.915234709 +0000 UTC m=+842.727585447" watchObservedRunningTime="2025-10-06 15:15:02.917406508 +0000 UTC m=+842.729757226" Oct 06 15:15:02 crc kubenswrapper[4888]: I1006 15:15:02.948747 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr" podStartSLOduration=4.562463591 podStartE2EDuration="31.948711552s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.142961266 +0000 UTC m=+813.955311994" lastFinishedPulling="2025-10-06 15:15:01.529209237 +0000 UTC m=+841.341559955" observedRunningTime="2025-10-06 15:15:02.940927958 +0000 UTC m=+842.753278676" watchObservedRunningTime="2025-10-06 15:15:02.948711552 +0000 UTC m=+842.761062270" Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.245972 4888 util.go:48] "No ready sandbox for pod can be found. 
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.245972 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz"
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.361573 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4706e942-55aa-4d9b-a703-ab2566f31e8b-config-volume\") pod \"4706e942-55aa-4d9b-a703-ab2566f31e8b\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") "
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.362374 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4706e942-55aa-4d9b-a703-ab2566f31e8b-secret-volume\") pod \"4706e942-55aa-4d9b-a703-ab2566f31e8b\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") "
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.363221 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2d5c\" (UniqueName: \"kubernetes.io/projected/4706e942-55aa-4d9b-a703-ab2566f31e8b-kube-api-access-w2d5c\") pod \"4706e942-55aa-4d9b-a703-ab2566f31e8b\" (UID: \"4706e942-55aa-4d9b-a703-ab2566f31e8b\") "
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.362267 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4706e942-55aa-4d9b-a703-ab2566f31e8b-config-volume" (OuterVolumeSpecName: "config-volume") pod "4706e942-55aa-4d9b-a703-ab2566f31e8b" (UID: "4706e942-55aa-4d9b-a703-ab2566f31e8b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.381083 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4706e942-55aa-4d9b-a703-ab2566f31e8b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4706e942-55aa-4d9b-a703-ab2566f31e8b" (UID: "4706e942-55aa-4d9b-a703-ab2566f31e8b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.389591 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4706e942-55aa-4d9b-a703-ab2566f31e8b-kube-api-access-w2d5c" (OuterVolumeSpecName: "kube-api-access-w2d5c") pod "4706e942-55aa-4d9b-a703-ab2566f31e8b" (UID: "4706e942-55aa-4d9b-a703-ab2566f31e8b"). InnerVolumeSpecName "kube-api-access-w2d5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.465055 4888 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4706e942-55aa-4d9b-a703-ab2566f31e8b-config-volume\") on node \"crc\" DevicePath \"\""
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.465378 4888 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4706e942-55aa-4d9b-a703-ab2566f31e8b-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.465441 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2d5c\" (UniqueName: \"kubernetes.io/projected/4706e942-55aa-4d9b-a703-ab2566f31e8b-kube-api-access-w2d5c\") on node \"crc\" DevicePath \"\""
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.913836 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz" event={"ID":"4706e942-55aa-4d9b-a703-ab2566f31e8b","Type":"ContainerDied","Data":"bf72755b98dadff0aa2d5781b3a8dee51933d3c408c72949c3d8b853204c5035"}
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.914415 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf72755b98dadff0aa2d5781b3a8dee51933d3c408c72949c3d8b853204c5035"
Oct 06 15:15:03 crc kubenswrapper[4888]: I1006 15:15:03.914066 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz"
Oct 06 15:15:08 crc kubenswrapper[4888]: I1006 15:15:08.949254 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" event={"ID":"c951d569-cbb5-4525-b66d-07f2473db97a","Type":"ContainerStarted","Data":"c871672eaf5a3de3e8f7a634c2cf5d93693fa979fd7ad8ef8336faf506c88dd6"}
Oct 06 15:15:08 crc kubenswrapper[4888]: I1006 15:15:08.950049 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686"
Oct 06 15:15:08 crc kubenswrapper[4888]: I1006 15:15:08.969213 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686" podStartSLOduration=3.579720246 podStartE2EDuration="37.969195147s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.105650202 +0000 UTC m=+813.918000920" lastFinishedPulling="2025-10-06 15:15:08.495125083 +0000 UTC m=+848.307475821" observedRunningTime="2025-10-06 15:15:08.96515949 +0000 UTC m=+848.777510218" watchObservedRunningTime="2025-10-06 15:15:08.969195147 +0000 UTC m=+848.781545865"
Oct 06 15:15:11 crc kubenswrapper[4888]: I1006 15:15:11.790610 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5dc44df7d5-6tfdd"
Oct 06 15:15:11 crc kubenswrapper[4888]: I1006 15:15:11.858433 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-76d5b87f47-gw6sx"
Oct 06 15:15:11 crc kubenswrapper[4888]: I1006 15:15:11.970001 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" event={"ID":"7d3c45dc-4628-4b73-b123-dda0b0cb4d72","Type":"ContainerStarted","Data":"e1d062661be800ccc7a6d41748f742c6929d428efc141c823274648fb9f2e444"}
Oct 06 15:15:11 crc kubenswrapper[4888]: I1006 15:15:11.970332 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7d4d4f8d-xwmzg"
Oct 06 15:15:11 crc kubenswrapper[4888]: I1006 15:15:11.970722 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz"
Oct 06 15:15:11 crc kubenswrapper[4888]: I1006 15:15:11.999166 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b5ccf6d9c-4p87x"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.011146 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz" podStartSLOduration=4.915175617 podStartE2EDuration="41.011125381s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:35.302002778 +0000 UTC m=+815.114353496" lastFinishedPulling="2025-10-06 15:15:11.397952542 +0000 UTC m=+851.210303260" observedRunningTime="2025-10-06 15:15:12.010322636 +0000 UTC m=+851.822673364" watchObservedRunningTime="2025-10-06 15:15:12.011125381 +0000 UTC m=+851.823476109"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.020399 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-wdknr"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.180365 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-8d984cc4d-wwg8m"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.379724 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7c7fc454ff-z5nwh"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.410414 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7468f855d8-pdwl5"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.489613 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-qtk96"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.748295 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-54689d9f88-m5c7d"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.847397 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5d4d74dd89-zdzkx"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.865626 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-c6fgp"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.916075 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6cbc6dd547-vbgpl"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.978488 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" event={"ID":"23fa3eba-f10e-42b2-bc39-2df07d518a0e","Type":"ContainerStarted","Data":"cf4685af390e48846953dacd31b3e1682d1a76e6e1e481e915992f0e242ae9ef"}
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.978675 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.980246 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" event={"ID":"c7b475c9-9590-41e5-9bd4-d7f9fb04958c","Type":"ContainerStarted","Data":"d10954f5cbbdb95415c2950bbea2e33f1e501b2bd96029cf10fba9eeba5da978"}
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.980535 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84"
Oct 06 15:15:12 crc kubenswrapper[4888]: I1006 15:15:12.997731 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z" podStartSLOduration=4.432616507 podStartE2EDuration="41.997714598s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.772981036 +0000 UTC m=+814.585331754" lastFinishedPulling="2025-10-06 15:15:12.338079117 +0000 UTC m=+852.150429845" observedRunningTime="2025-10-06 15:15:12.994615911 +0000 UTC m=+852.806966629" watchObservedRunningTime="2025-10-06 15:15:12.997714598 +0000 UTC m=+852.810065316"
Oct 06 15:15:13 crc kubenswrapper[4888]: I1006 15:15:13.011848 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84" podStartSLOduration=3.661639843 podStartE2EDuration="42.011830391s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.142585215 +0000 UTC m=+813.954935933" lastFinishedPulling="2025-10-06 15:15:12.492775773 +0000 UTC m=+852.305126481" observedRunningTime="2025-10-06 15:15:13.011281214 +0000 UTC m=+852.823631932" watchObservedRunningTime="2025-10-06 15:15:13.011830391 +0000 UTC m=+852.824181109"
Oct 06 15:15:13 crc kubenswrapper[4888]: I1006 15:15:13.919267 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4"
Oct 06 15:15:14 crc kubenswrapper[4888]: I1006 15:15:14.997244 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" event={"ID":"5a1c3c1c-2a06-49a0-9189-acbcdd0053c6","Type":"ContainerStarted","Data":"e0c781d918ee7a79270ef54b02d13320c2ac9a5ce575e9de808c1467ef8a666f"}
Oct 06 15:15:14 crc kubenswrapper[4888]: I1006 15:15:14.997468 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd"
Oct 06 15:15:15 crc kubenswrapper[4888]: I1006 15:15:15.017228 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd" podStartSLOduration=3.735509308 podStartE2EDuration="44.017212978s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.215602912 +0000 UTC m=+814.027953630" lastFinishedPulling="2025-10-06 15:15:14.497306582 +0000 UTC m=+854.309657300" observedRunningTime="2025-10-06 15:15:15.012078097 +0000 UTC m=+854.824428805" watchObservedRunningTime="2025-10-06 15:15:15.017212978 +0000 UTC m=+854.829563696"
Oct 06 15:15:16 crc kubenswrapper[4888]: I1006 15:15:16.003851 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" event={"ID":"a3e1786a-e54d-4a41-a974-fab79300a4b9","Type":"ContainerStarted","Data":"86421643de8a30e6b48279c9604db9e86b19f2c6e8127a1c7eeaae6695178c5d"}
Oct 06 15:15:16 crc kubenswrapper[4888]: I1006 15:15:16.004326 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z"
Oct 06 15:15:16 crc kubenswrapper[4888]: I1006 15:15:16.021324 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z" podStartSLOduration=3.91201782 podStartE2EDuration="45.021304366s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.220281978 +0000 UTC m=+814.032632696" lastFinishedPulling="2025-10-06 15:15:15.329568524 +0000 UTC m=+855.141919242" observedRunningTime="2025-10-06 15:15:16.017834355 +0000 UTC m=+855.830185083" watchObservedRunningTime="2025-10-06 15:15:16.021304366 +0000 UTC m=+855.833655084"
Oct 06 15:15:17 crc kubenswrapper[4888]: I1006 15:15:17.010910 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" event={"ID":"0ab9d525-93a8-4920-a6e4-e70dfd942ce3","Type":"ContainerStarted","Data":"5fb408c2c4de6d71ac9d18c4b47b8c9b87c4ac374d404b5df0dd15ab693a9235"}
Oct 06 15:15:17 crc kubenswrapper[4888]: I1006 15:15:17.031332 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r" podStartSLOduration=3.792249093 podStartE2EDuration="46.031310419s" podCreationTimestamp="2025-10-06 15:14:31 +0000 UTC" firstStartedPulling="2025-10-06 15:14:34.119183008 +0000 UTC m=+813.931533726" lastFinishedPulling="2025-10-06 15:15:16.358244334 +0000 UTC m=+856.170595052" observedRunningTime="2025-10-06 15:15:17.026049722 +0000 UTC m=+856.838400450" watchObservedRunningTime="2025-10-06 15:15:17.031310419 +0000 UTC m=+856.843661137"
Oct 06 15:15:21 crc kubenswrapper[4888]: I1006 15:15:21.854077 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-54b4974c45-nn686"
Oct 06 15:15:21 crc kubenswrapper[4888]: I1006 15:15:21.961436 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-58c4cd55f4-kzz5z"
Oct 06 15:15:22 crc kubenswrapper[4888]: I1006 15:15:22.144978 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r"
Oct 06 15:15:22 crc kubenswrapper[4888]: I1006 15:15:22.146593 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6cd6d7bdf5-4m69r"
Oct 06 15:15:22 crc kubenswrapper[4888]: I1006 15:15:22.166049 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-65d89cfd9f-mmfzd"
Oct 06 15:15:22 crc kubenswrapper[4888]: I1006 15:15:22.209509 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-649675d675-jxh84"
Oct 06 15:15:22 crc kubenswrapper[4888]: I1006 15:15:22.777170 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6d8b6f9b9-jsk2z"
Oct 06 15:15:23 crc kubenswrapper[4888]: I1006 15:15:23.700893 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-658588b8c9-nbssz"
Oct 06 15:15:32 crc kubenswrapper[4888]: I1006 15:15:32.563465 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 15:15:32 crc kubenswrapper[4888]: I1006 15:15:32.564015 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
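The liveness failure above is the kubelet issuing a plain HTTP GET against the endpoint named in the message and getting connection refused. A minimal sketch (not part of the log) reproducing the same check from the node, in Python:

    # Reproduce the HTTP liveness check against the machine-config-daemon health
    # endpoint from the log; while the daemon's health server is down this prints
    # the same "connection refused" the prober reports.
    import urllib.error
    import urllib.request

    try:
        with urllib.request.urlopen("http://127.0.0.1:8798/health", timeout=1) as resp:
            print("probe ok:", resp.status)
    except urllib.error.URLError as err:
        print("probe failed:", err.reason)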
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.737337 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-knjf2"]
Oct 06 15:15:40 crc kubenswrapper[4888]: E1006 15:15:40.739106 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4706e942-55aa-4d9b-a703-ab2566f31e8b" containerName="collect-profiles"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.739186 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4706e942-55aa-4d9b-a703-ab2566f31e8b" containerName="collect-profiles"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.739366 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4706e942-55aa-4d9b-a703-ab2566f31e8b" containerName="collect-profiles"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.740119 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.746848 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-pshq5"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.751603 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-knjf2"]
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.752270 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.753141 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.760177 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.846941 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mwk2g"]
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.851898 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.854658 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.884759 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6l9l\" (UniqueName: \"kubernetes.io/projected/57bda89f-ca60-4250-90f4-b6bff2713ac0-kube-api-access-n6l9l\") pod \"dnsmasq-dns-675f4bcbfc-knjf2\" (UID: \"57bda89f-ca60-4250-90f4-b6bff2713ac0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.884910 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57bda89f-ca60-4250-90f4-b6bff2713ac0-config\") pod \"dnsmasq-dns-675f4bcbfc-knjf2\" (UID: \"57bda89f-ca60-4250-90f4-b6bff2713ac0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.907410 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mwk2g"]
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.986713 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57bda89f-ca60-4250-90f4-b6bff2713ac0-config\") pod \"dnsmasq-dns-675f4bcbfc-knjf2\" (UID: \"57bda89f-ca60-4250-90f4-b6bff2713ac0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.986784 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6l9l\" (UniqueName: \"kubernetes.io/projected/57bda89f-ca60-4250-90f4-b6bff2713ac0-kube-api-access-n6l9l\") pod \"dnsmasq-dns-675f4bcbfc-knjf2\" (UID: \"57bda89f-ca60-4250-90f4-b6bff2713ac0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.987227 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8dq5\" (UniqueName: \"kubernetes.io/projected/1937aa19-8288-41a0-98b5-049842d7ee4f-kube-api-access-m8dq5\") pod \"dnsmasq-dns-78dd6ddcc-mwk2g\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.987545 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-config\") pod \"dnsmasq-dns-78dd6ddcc-mwk2g\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.987597 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mwk2g\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:15:40 crc kubenswrapper[4888]: I1006 15:15:40.988391 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57bda89f-ca60-4250-90f4-b6bff2713ac0-config\") pod \"dnsmasq-dns-675f4bcbfc-knjf2\" (UID: \"57bda89f-ca60-4250-90f4-b6bff2713ac0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2"
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.010631 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6l9l\" (UniqueName: \"kubernetes.io/projected/57bda89f-ca60-4250-90f4-b6bff2713ac0-kube-api-access-n6l9l\") pod \"dnsmasq-dns-675f4bcbfc-knjf2\" (UID: \"57bda89f-ca60-4250-90f4-b6bff2713ac0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2"
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.085197 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2"
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.088143 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8dq5\" (UniqueName: \"kubernetes.io/projected/1937aa19-8288-41a0-98b5-049842d7ee4f-kube-api-access-m8dq5\") pod \"dnsmasq-dns-78dd6ddcc-mwk2g\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.088851 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-config\") pod \"dnsmasq-dns-78dd6ddcc-mwk2g\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.089610 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mwk2g\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.089562 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-config\") pod \"dnsmasq-dns-78dd6ddcc-mwk2g\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.090192 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mwk2g\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.110623 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8dq5\" (UniqueName: \"kubernetes.io/projected/1937aa19-8288-41a0-98b5-049842d7ee4f-kube-api-access-m8dq5\") pod \"dnsmasq-dns-78dd6ddcc-mwk2g\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.173216 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.530777 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-knjf2"]
Oct 06 15:15:41 crc kubenswrapper[4888]: W1006 15:15:41.537536 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57bda89f_ca60_4250_90f4_b6bff2713ac0.slice/crio-178f9283314fe4fc096032a3eacc33b26298456414c71f719298196a9761a5e4 WatchSource:0}: Error finding container 178f9283314fe4fc096032a3eacc33b26298456414c71f719298196a9761a5e4: Status 404 returned error can't find the container with id 178f9283314fe4fc096032a3eacc33b26298456414c71f719298196a9761a5e4
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.539428 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 06 15:15:41 crc kubenswrapper[4888]: I1006 15:15:41.666262 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mwk2g"]
Oct 06 15:15:41 crc kubenswrapper[4888]: W1006 15:15:41.668572 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1937aa19_8288_41a0_98b5_049842d7ee4f.slice/crio-130a66b12f58eef882a7970144ebcb52b4470cec2d3b5a2dd36ba2bcb3aa5e66 WatchSource:0}: Error finding container 130a66b12f58eef882a7970144ebcb52b4470cec2d3b5a2dd36ba2bcb3aa5e66: Status 404 returned error can't find the container with id 130a66b12f58eef882a7970144ebcb52b4470cec2d3b5a2dd36ba2bcb3aa5e66
Oct 06 15:15:42 crc kubenswrapper[4888]: I1006 15:15:42.200790 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2" event={"ID":"57bda89f-ca60-4250-90f4-b6bff2713ac0","Type":"ContainerStarted","Data":"178f9283314fe4fc096032a3eacc33b26298456414c71f719298196a9761a5e4"}
Oct 06 15:15:42 crc kubenswrapper[4888]: I1006 15:15:42.202204 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g" event={"ID":"1937aa19-8288-41a0-98b5-049842d7ee4f","Type":"ContainerStarted","Data":"130a66b12f58eef882a7970144ebcb52b4470cec2d3b5a2dd36ba2bcb3aa5e66"}
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.686979 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-knjf2"]
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.713449 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-l7jvm"]
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.714753 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm"
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.732515 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-config\") pod \"dnsmasq-dns-5ccc8479f9-l7jvm\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm"
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.732597 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2skd\" (UniqueName: \"kubernetes.io/projected/772313fa-9fed-485d-9a5e-8e4b877d0508-kube-api-access-z2skd\") pod \"dnsmasq-dns-5ccc8479f9-l7jvm\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm"
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.732623 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-l7jvm\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm"
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.739328 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-l7jvm"]
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.834073 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-config\") pod \"dnsmasq-dns-5ccc8479f9-l7jvm\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm"
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.834169 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2skd\" (UniqueName: \"kubernetes.io/projected/772313fa-9fed-485d-9a5e-8e4b877d0508-kube-api-access-z2skd\") pod \"dnsmasq-dns-5ccc8479f9-l7jvm\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm"
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.834197 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-l7jvm\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm"
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.835344 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-l7jvm\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm"
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.836085 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-config\") pod \"dnsmasq-dns-5ccc8479f9-l7jvm\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm"
Oct 06 15:15:43 crc kubenswrapper[4888]: I1006 15:15:43.873860 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2skd\" (UniqueName: \"kubernetes.io/projected/772313fa-9fed-485d-9a5e-8e4b877d0508-kube-api-access-z2skd\") pod \"dnsmasq-dns-5ccc8479f9-l7jvm\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.029044 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mwk2g"]
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.041206 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.068278 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ss4h7"]
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.069456 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.085585 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ss4h7"]
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.138553 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ss4h7\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.138627 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-config\") pod \"dnsmasq-dns-57d769cc4f-ss4h7\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.138678 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrk84\" (UniqueName: \"kubernetes.io/projected/ac5e3688-6436-419f-9267-a341ff87652f-kube-api-access-mrk84\") pod \"dnsmasq-dns-57d769cc4f-ss4h7\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.240036 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-config\") pod \"dnsmasq-dns-57d769cc4f-ss4h7\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.240424 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrk84\" (UniqueName: \"kubernetes.io/projected/ac5e3688-6436-419f-9267-a341ff87652f-kube-api-access-mrk84\") pod \"dnsmasq-dns-57d769cc4f-ss4h7\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.240505 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ss4h7\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.241887 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-config\") pod \"dnsmasq-dns-57d769cc4f-ss4h7\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.242113 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ss4h7\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.312867 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrk84\" (UniqueName: \"kubernetes.io/projected/ac5e3688-6436-419f-9267-a341ff87652f-kube-api-access-mrk84\") pod \"dnsmasq-dns-57d769cc4f-ss4h7\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.473322 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.873767 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.875784 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.883673 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.884929 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m5r5h"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.885098 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.885318 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.885491 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.885642 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.886407 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.918932 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 06 15:15:44 crc kubenswrapper[4888]: I1006 15:15:44.969426 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-l7jvm"]
Oct 06 15:15:45 crc kubenswrapper[4888]: W1006 15:15:45.011480 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod772313fa_9fed_485d_9a5e_8e4b877d0508.slice/crio-9623f3206bbc33a18c8d2f6bff9e3a0d62b23299432cd2e9fd49ad9b13aa2225 WatchSource:0}: Error finding container 9623f3206bbc33a18c8d2f6bff9e3a0d62b23299432cd2e9fd49ad9b13aa2225: Status 404 returned error can't find the container with id 9623f3206bbc33a18c8d2f6bff9e3a0d62b23299432cd2e9fd49ad9b13aa2225
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.054750 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.054940 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.056213 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.056425 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.056547 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.056606 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.056714 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.056776 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc827\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-kube-api-access-xc827\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.056852 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.056919 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.056955 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.158410 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.158546 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc827\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-kube-api-access-xc827\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.158581 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.158602 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.158650 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.158717 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.158772 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.158918 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.159020 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.159072 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.159104 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.160094 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.160453 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.160771 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.160780 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.161813 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.164451 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.165414 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.166590 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.168154 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.170259 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.182771 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc827\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-kube-api-access-xc827\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.198121 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.212359 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ss4h7"]
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.225029 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.226885 4888 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: W1006 15:15:45.235160 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac5e3688_6436_419f_9267_a341ff87652f.slice/crio-380c7b64e1ab6e25f28302dc26d9886ec0665480a8f2d105937f8cbcf9ed54b2 WatchSource:0}: Error finding container 380c7b64e1ab6e25f28302dc26d9886ec0665480a8f2d105937f8cbcf9ed54b2: Status 404 returned error can't find the container with id 380c7b64e1ab6e25f28302dc26d9886ec0665480a8f2d105937f8cbcf9ed54b2 Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.236242 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.236551 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.236727 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.239466 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.239704 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.239857 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-b8dpc" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.239883 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.256206 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.261620 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.290932 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm" event={"ID":"772313fa-9fed-485d-9a5e-8e4b877d0508","Type":"ContainerStarted","Data":"9623f3206bbc33a18c8d2f6bff9e3a0d62b23299432cd2e9fd49ad9b13aa2225"} Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.361781 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.365355 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-config-data\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.365494 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ed3909-71e7-40e7-9e97-e9917d621080-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.365875 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ed3909-71e7-40e7-9e97-e9917d621080-pod-info\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.366022 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbd2x\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-kube-api-access-hbd2x\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.366123 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.366222 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.366489 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.366594 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-server-conf\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.366716 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.366856 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473108 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473466 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-server-conf\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473486 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473514 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473540 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473564 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473590 4888 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-config-data\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473609 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ed3909-71e7-40e7-9e97-e9917d621080-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473644 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ed3909-71e7-40e7-9e97-e9917d621080-pod-info\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473676 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbd2x\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-kube-api-access-hbd2x\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473706 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.473988 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.475754 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.476100 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-config-data\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.477119 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.481441 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" 
Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.483303 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.484682 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ed3909-71e7-40e7-9e97-e9917d621080-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.485285 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ed3909-71e7-40e7-9e97-e9917d621080-pod-info\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.485489 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.494212 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-server-conf\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.497851 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbd2x\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-kube-api-access-hbd2x\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.505567 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.605914 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 15:15:45 crc kubenswrapper[4888]: I1006 15:15:45.751093 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 15:15:45 crc kubenswrapper[4888]: W1006 15:15:45.782613 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf44ccc0c_19ed_4959_ac2c_46842cd27fc1.slice/crio-57cf5f2d5a2b44ef0ee059f8db3e6b4a80cb0908a99f484199bbdaed25471d64 WatchSource:0}: Error finding container 57cf5f2d5a2b44ef0ee059f8db3e6b4a80cb0908a99f484199bbdaed25471d64: Status 404 returned error can't find the container with id 57cf5f2d5a2b44ef0ee059f8db3e6b4a80cb0908a99f484199bbdaed25471d64 Oct 06 15:15:46 crc kubenswrapper[4888]: I1006 15:15:46.165505 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 15:15:46 crc kubenswrapper[4888]: W1006 15:15:46.189579 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91ed3909_71e7_40e7_9e97_e9917d621080.slice/crio-e22186ba0d6ff3d423cd0671c6928569f65cf704b0b4c540357b44984871aeb9 WatchSource:0}: Error finding container e22186ba0d6ff3d423cd0671c6928569f65cf704b0b4c540357b44984871aeb9: Status 404 returned error can't find the container with id e22186ba0d6ff3d423cd0671c6928569f65cf704b0b4c540357b44984871aeb9 Oct 06 15:15:46 crc kubenswrapper[4888]: I1006 15:15:46.305324 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7" event={"ID":"ac5e3688-6436-419f-9267-a341ff87652f","Type":"ContainerStarted","Data":"380c7b64e1ab6e25f28302dc26d9886ec0665480a8f2d105937f8cbcf9ed54b2"} Oct 06 15:15:46 crc kubenswrapper[4888]: I1006 15:15:46.316354 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ed3909-71e7-40e7-9e97-e9917d621080","Type":"ContainerStarted","Data":"e22186ba0d6ff3d423cd0671c6928569f65cf704b0b4c540357b44984871aeb9"} Oct 06 15:15:46 crc kubenswrapper[4888]: I1006 15:15:46.318207 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f44ccc0c-19ed-4959-ac2c-46842cd27fc1","Type":"ContainerStarted","Data":"57cf5f2d5a2b44ef0ee059f8db3e6b4a80cb0908a99f484199bbdaed25471d64"} Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.240412 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.242853 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.245436 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.247597 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.250090 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-z8k56" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.250358 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.254715 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.255066 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.273367 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.438945 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.439053 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsmt2\" (UniqueName: \"kubernetes.io/projected/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-kube-api-access-jsmt2\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.439100 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.439139 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.439170 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.439201 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.439230 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.439363 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.439429 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.540683 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.541043 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.541160 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.542507 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.542592 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.542687 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsmt2\" (UniqueName: \"kubernetes.io/projected/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-kube-api-access-jsmt2\") pod \"openstack-cell1-galera-0\" (UID: 
\"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.542750 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.542825 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.542856 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.542906 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.543048 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.543822 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.544669 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.553913 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.553942 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.560308 4888 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.569814 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsmt2\" (UniqueName: \"kubernetes.io/projected/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-kube-api-access-jsmt2\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.571395 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.572015 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.875202 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.895035 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.896601 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.902327 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-559r9" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.902835 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.905967 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.906205 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 06 15:15:47 crc kubenswrapper[4888]: I1006 15:15:47.914359 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.051974 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wlkx\" (UniqueName: \"kubernetes.io/projected/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-kube-api-access-4wlkx\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.052047 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-kolla-config\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.052114 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.052129 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-config-data-default\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.052151 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-secrets\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.052166 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.052204 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.052223 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.052240 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.158561 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wlkx\" (UniqueName: \"kubernetes.io/projected/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-kube-api-access-4wlkx\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.158681 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-kolla-config\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.158784 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.158821 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-config-data-default\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.158851 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-secrets\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.158871 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.158987 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.159016 4888 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.159050 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.161469 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.162279 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-kolla-config\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.163286 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.163436 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.172474 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-config-data-default\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.179692 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.198489 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-secrets\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.201028 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 
15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.231589 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wlkx\" (UniqueName: \"kubernetes.io/projected/b6265e0c-c180-4f1b-9d3b-73321ed1caf5-kube-api-access-4wlkx\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.263637 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"b6265e0c-c180-4f1b-9d3b-73321ed1caf5\") " pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.266755 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.268452 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.285629 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.286891 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-8jhz8" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.287113 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.294529 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.364229 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0f5f3-c656-4e89-a5d6-73443af4afc4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.364286 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49e0f5f3-c656-4e89-a5d6-73443af4afc4-kolla-config\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.364323 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxd8c\" (UniqueName: \"kubernetes.io/projected/49e0f5f3-c656-4e89-a5d6-73443af4afc4-kube-api-access-bxd8c\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.364348 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49e0f5f3-c656-4e89-a5d6-73443af4afc4-config-data\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.364364 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0f5f3-c656-4e89-a5d6-73443af4afc4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " 
pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.472714 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0f5f3-c656-4e89-a5d6-73443af4afc4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.472889 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49e0f5f3-c656-4e89-a5d6-73443af4afc4-kolla-config\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.472990 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxd8c\" (UniqueName: \"kubernetes.io/projected/49e0f5f3-c656-4e89-a5d6-73443af4afc4-kube-api-access-bxd8c\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.473040 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49e0f5f3-c656-4e89-a5d6-73443af4afc4-config-data\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.473070 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0f5f3-c656-4e89-a5d6-73443af4afc4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.477343 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e0f5f3-c656-4e89-a5d6-73443af4afc4-combined-ca-bundle\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.477684 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/49e0f5f3-c656-4e89-a5d6-73443af4afc4-kolla-config\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.477777 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/49e0f5f3-c656-4e89-a5d6-73443af4afc4-memcached-tls-certs\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.477982 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49e0f5f3-c656-4e89-a5d6-73443af4afc4-config-data\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.503930 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxd8c\" (UniqueName: \"kubernetes.io/projected/49e0f5f3-c656-4e89-a5d6-73443af4afc4-kube-api-access-bxd8c\") pod \"memcached-0\" (UID: \"49e0f5f3-c656-4e89-a5d6-73443af4afc4\") " 
pod="openstack/memcached-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.524292 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 06 15:15:48 crc kubenswrapper[4888]: I1006 15:15:48.618806 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 06 15:15:50 crc kubenswrapper[4888]: I1006 15:15:50.117965 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 15:15:50 crc kubenswrapper[4888]: I1006 15:15:50.124865 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 15:15:50 crc kubenswrapper[4888]: I1006 15:15:50.139816 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 15:15:50 crc kubenswrapper[4888]: I1006 15:15:50.140583 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-g26wn" Oct 06 15:15:50 crc kubenswrapper[4888]: I1006 15:15:50.172690 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2dxh\" (UniqueName: \"kubernetes.io/projected/6a9289a6-f236-4e76-ac2b-4ef38163f845-kube-api-access-m2dxh\") pod \"kube-state-metrics-0\" (UID: \"6a9289a6-f236-4e76-ac2b-4ef38163f845\") " pod="openstack/kube-state-metrics-0" Oct 06 15:15:50 crc kubenswrapper[4888]: I1006 15:15:50.280149 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2dxh\" (UniqueName: \"kubernetes.io/projected/6a9289a6-f236-4e76-ac2b-4ef38163f845-kube-api-access-m2dxh\") pod \"kube-state-metrics-0\" (UID: \"6a9289a6-f236-4e76-ac2b-4ef38163f845\") " pod="openstack/kube-state-metrics-0" Oct 06 15:15:50 crc kubenswrapper[4888]: I1006 15:15:50.311538 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2dxh\" (UniqueName: \"kubernetes.io/projected/6a9289a6-f236-4e76-ac2b-4ef38163f845-kube-api-access-m2dxh\") pod \"kube-state-metrics-0\" (UID: \"6a9289a6-f236-4e76-ac2b-4ef38163f845\") " pod="openstack/kube-state-metrics-0" Oct 06 15:15:50 crc kubenswrapper[4888]: I1006 15:15:50.470522 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.599114 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nzvpg"] Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.600619 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.607715 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.608187 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hkmtm" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.614433 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-mczwz"] Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.616070 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.621095 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nzvpg"] Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.627282 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.682813 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mczwz"] Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728410 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82705879-10de-4927-946c-c55766069d1b-combined-ca-bundle\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728468 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/737bd423-e5c2-4a4e-9463-56e1bb95b101-scripts\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728496 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82705879-10de-4927-946c-c55766069d1b-var-run-ovn\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728533 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82705879-10de-4927-946c-c55766069d1b-var-run\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728558 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-etc-ovs\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728583 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-var-log\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728625 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/82705879-10de-4927-946c-c55766069d1b-ovn-controller-tls-certs\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728711 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wsp\" (UniqueName: 
\"kubernetes.io/projected/82705879-10de-4927-946c-c55766069d1b-kube-api-access-w6wsp\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728781 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-var-lib\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728844 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trtcl\" (UniqueName: \"kubernetes.io/projected/737bd423-e5c2-4a4e-9463-56e1bb95b101-kube-api-access-trtcl\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728874 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82705879-10de-4927-946c-c55766069d1b-var-log-ovn\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728914 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82705879-10de-4927-946c-c55766069d1b-scripts\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.728941 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-var-run\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830579 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82705879-10de-4927-946c-c55766069d1b-var-log-ovn\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830639 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82705879-10de-4927-946c-c55766069d1b-scripts\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830669 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-var-run\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830703 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82705879-10de-4927-946c-c55766069d1b-combined-ca-bundle\") pod \"ovn-controller-nzvpg\" (UID: 
\"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830737 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/737bd423-e5c2-4a4e-9463-56e1bb95b101-scripts\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830757 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82705879-10de-4927-946c-c55766069d1b-var-run-ovn\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830781 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82705879-10de-4927-946c-c55766069d1b-var-run\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830811 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-etc-ovs\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830830 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-var-log\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830862 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/82705879-10de-4927-946c-c55766069d1b-ovn-controller-tls-certs\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830895 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wsp\" (UniqueName: \"kubernetes.io/projected/82705879-10de-4927-946c-c55766069d1b-kube-api-access-w6wsp\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830914 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-var-lib\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.830938 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trtcl\" (UniqueName: \"kubernetes.io/projected/737bd423-e5c2-4a4e-9463-56e1bb95b101-kube-api-access-trtcl\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.831284 4888 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-var-run\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.832324 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/82705879-10de-4927-946c-c55766069d1b-var-run-ovn\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.833144 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-etc-ovs\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.833216 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/82705879-10de-4927-946c-c55766069d1b-var-run\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.833847 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/82705879-10de-4927-946c-c55766069d1b-scripts\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.838144 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/82705879-10de-4927-946c-c55766069d1b-var-log-ovn\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.839319 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-var-log\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.839540 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/737bd423-e5c2-4a4e-9463-56e1bb95b101-var-lib\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.849169 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/737bd423-e5c2-4a4e-9463-56e1bb95b101-scripts\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.853431 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82705879-10de-4927-946c-c55766069d1b-combined-ca-bundle\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.853982 
4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/82705879-10de-4927-946c-c55766069d1b-ovn-controller-tls-certs\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.854031 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wsp\" (UniqueName: \"kubernetes.io/projected/82705879-10de-4927-946c-c55766069d1b-kube-api-access-w6wsp\") pod \"ovn-controller-nzvpg\" (UID: \"82705879-10de-4927-946c-c55766069d1b\") " pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.856288 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trtcl\" (UniqueName: \"kubernetes.io/projected/737bd423-e5c2-4a4e-9463-56e1bb95b101-kube-api-access-trtcl\") pod \"ovn-controller-ovs-mczwz\" (UID: \"737bd423-e5c2-4a4e-9463-56e1bb95b101\") " pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.925248 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nzvpg" Oct 06 15:15:52 crc kubenswrapper[4888]: I1006 15:15:52.984632 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.281545 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.283170 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.285603 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-hhfhb" Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.286031 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.286363 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.286449 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.286527 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.302829 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.455550 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e5b21496-4576-4872-9a7b-a0fa475466a6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0" Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.455866 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5b21496-4576-4872-9a7b-a0fa475466a6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0" 
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.455953 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.456064 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5b21496-4576-4872-9a7b-a0fa475466a6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.456175 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b21496-4576-4872-9a7b-a0fa475466a6-config\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.456311 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdbt6\" (UniqueName: \"kubernetes.io/projected/e5b21496-4576-4872-9a7b-a0fa475466a6-kube-api-access-bdbt6\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.456439 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b21496-4576-4872-9a7b-a0fa475466a6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.456553 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5b21496-4576-4872-9a7b-a0fa475466a6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.559679 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e5b21496-4576-4872-9a7b-a0fa475466a6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.558996 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e5b21496-4576-4872-9a7b-a0fa475466a6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.560411 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5b21496-4576-4872-9a7b-a0fa475466a6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.560525 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.560635 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5b21496-4576-4872-9a7b-a0fa475466a6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.560747 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b21496-4576-4872-9a7b-a0fa475466a6-config\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.560880 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.561456 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5b21496-4576-4872-9a7b-a0fa475466a6-config\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.560900 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdbt6\" (UniqueName: \"kubernetes.io/projected/e5b21496-4576-4872-9a7b-a0fa475466a6-kube-api-access-bdbt6\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.561638 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b21496-4576-4872-9a7b-a0fa475466a6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.561657 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5b21496-4576-4872-9a7b-a0fa475466a6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.561734 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5b21496-4576-4872-9a7b-a0fa475466a6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.565978 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5b21496-4576-4872-9a7b-a0fa475466a6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.566053 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5b21496-4576-4872-9a7b-a0fa475466a6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.567924 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5b21496-4576-4872-9a7b-a0fa475466a6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.583481 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.588223 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdbt6\" (UniqueName: \"kubernetes.io/projected/e5b21496-4576-4872-9a7b-a0fa475466a6-kube-api-access-bdbt6\") pod \"ovsdbserver-nb-0\" (UID: \"e5b21496-4576-4872-9a7b-a0fa475466a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:54 crc kubenswrapper[4888]: I1006 15:15:54.599629 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.799837 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.801592 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.804101 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.804757 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.805188 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-pnhwz"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.806170 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.816928 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.917637 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj9mn\" (UniqueName: \"kubernetes.io/projected/74ff743d-532c-4a3a-bf4a-967b9edca039-kube-api-access-xj9mn\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.917954 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74ff743d-532c-4a3a-bf4a-967b9edca039-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.917980 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74ff743d-532c-4a3a-bf4a-967b9edca039-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.918227 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.918308 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74ff743d-532c-4a3a-bf4a-967b9edca039-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.918374 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ff743d-532c-4a3a-bf4a-967b9edca039-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.918448 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ff743d-532c-4a3a-bf4a-967b9edca039-config\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:57 crc kubenswrapper[4888]: I1006 15:15:57.918539 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74ff743d-532c-4a3a-bf4a-967b9edca039-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.019923 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj9mn\" (UniqueName: \"kubernetes.io/projected/74ff743d-532c-4a3a-bf4a-967b9edca039-kube-api-access-xj9mn\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.019964 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74ff743d-532c-4a3a-bf4a-967b9edca039-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.019985 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74ff743d-532c-4a3a-bf4a-967b9edca039-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.020050 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.020087 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74ff743d-532c-4a3a-bf4a-967b9edca039-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.020124 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ff743d-532c-4a3a-bf4a-967b9edca039-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.020147 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ff743d-532c-4a3a-bf4a-967b9edca039-config\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.020183 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74ff743d-532c-4a3a-bf4a-967b9edca039-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.020756 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74ff743d-532c-4a3a-bf4a-967b9edca039-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.021162 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.021412 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74ff743d-532c-4a3a-bf4a-967b9edca039-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.022314 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74ff743d-532c-4a3a-bf4a-967b9edca039-config\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.026556 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74ff743d-532c-4a3a-bf4a-967b9edca039-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.029500 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74ff743d-532c-4a3a-bf4a-967b9edca039-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.033643 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74ff743d-532c-4a3a-bf4a-967b9edca039-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.039045 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj9mn\" (UniqueName: \"kubernetes.io/projected/74ff743d-532c-4a3a-bf4a-967b9edca039-kube-api-access-xj9mn\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.046330 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"74ff743d-532c-4a3a-bf4a-967b9edca039\") " pod="openstack/ovsdbserver-sb-0"
Oct 06 15:15:58 crc kubenswrapper[4888]: I1006 15:15:58.138171 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 06 15:16:02 crc kubenswrapper[4888]: I1006 15:16:02.564147 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 15:16:02 crc kubenswrapper[4888]: I1006 15:16:02.564736 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 15:16:12 crc kubenswrapper[4888]: E1006 15:16:12.964871 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Oct 06 15:16:12 crc kubenswrapper[4888]: E1006 15:16:12.966832 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(91ed3909-71e7-40e7-9e97-e9917d621080): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 06 15:16:12 crc kubenswrapper[4888]: E1006 15:16:12.969113 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="91ed3909-71e7-40e7-9e97-e9917d621080"
Oct 06 15:16:13 crc kubenswrapper[4888]: E1006 15:16:13.589516 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="91ed3909-71e7-40e7-9e97-e9917d621080"
Oct 06 15:16:13 crc kubenswrapper[4888]: E1006 15:16:13.791864 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Oct 06 15:16:13 crc kubenswrapper[4888]: E1006 15:16:13.792025 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6l9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-knjf2_openstack(57bda89f-ca60-4250-90f4-b6bff2713ac0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 06 15:16:13 crc kubenswrapper[4888]: E1006 15:16:13.793232 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2" podUID="57bda89f-ca60-4250-90f4-b6bff2713ac0"
Oct 06 15:16:13 crc kubenswrapper[4888]: E1006 15:16:13.842073 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Oct 06 15:16:13 crc kubenswrapper[4888]: E1006 15:16:13.842216 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrk84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-ss4h7_openstack(ac5e3688-6436-419f-9267-a341ff87652f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 06 15:16:13 crc kubenswrapper[4888]: E1006 15:16:13.843601 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7" podUID="ac5e3688-6436-419f-9267-a341ff87652f"
Oct 06 15:16:13 crc kubenswrapper[4888]: E1006 15:16:13.922669 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Oct 06 15:16:13 crc kubenswrapper[4888]: E1006 15:16:13.923029 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8dq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-mwk2g_openstack(1937aa19-8288-41a0-98b5-049842d7ee4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 06 15:16:13 crc kubenswrapper[4888]: E1006 15:16:13.924725 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g" podUID="1937aa19-8288-41a0-98b5-049842d7ee4f"
Oct 06 15:16:14 crc kubenswrapper[4888]: E1006 15:16:14.009259 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Oct 06 15:16:14 crc kubenswrapper[4888]: E1006 15:16:14.009399 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2skd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-l7jvm_openstack(772313fa-9fed-485d-9a5e-8e4b877d0508): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 06 15:16:14 crc kubenswrapper[4888]: E1006 15:16:14.011003 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm" podUID="772313fa-9fed-485d-9a5e-8e4b877d0508"
Oct 06 15:16:14 crc kubenswrapper[4888]: I1006 15:16:14.428687 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nzvpg"]
Oct 06 15:16:14 crc kubenswrapper[4888]: I1006 15:16:14.544901 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 06 15:16:14 crc kubenswrapper[4888]: W1006 15:16:14.560669 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d44f4c2_c7ba_4bb2_b2e2_16fafc256ea5.slice/crio-c2cfa3b1e119fbd4d3598dd7716e03ddcda3827b91e1b021e0e2d6782b89fe0f WatchSource:0}: Error finding container c2cfa3b1e119fbd4d3598dd7716e03ddcda3827b91e1b021e0e2d6782b89fe0f: Status 404 returned error can't find the container with id c2cfa3b1e119fbd4d3598dd7716e03ddcda3827b91e1b021e0e2d6782b89fe0f
Oct 06 15:16:14 crc kubenswrapper[4888]: I1006 15:16:14.577652 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 06 15:16:14 crc kubenswrapper[4888]: I1006 15:16:14.584826 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Oct 06 15:16:14 crc kubenswrapper[4888]: W1006 15:16:14.589237 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6265e0c_c180_4f1b_9d3b_73321ed1caf5.slice/crio-9585c2b917805d17bca570110ca18e6fbdea3323f889e9fbb31cde30f40e9e51 WatchSource:0}: Error finding container 9585c2b917805d17bca570110ca18e6fbdea3323f889e9fbb31cde30f40e9e51: Status 404 returned error can't find the container with id 9585c2b917805d17bca570110ca18e6fbdea3323f889e9fbb31cde30f40e9e51
Oct 06 15:16:14 crc kubenswrapper[4888]: I1006 15:16:14.599315 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6a9289a6-f236-4e76-ac2b-4ef38163f845","Type":"ContainerStarted","Data":"a1d104ac1333ab12e8cfcd11d609f1c8a196c5ef8f75bb089719b727abd5ebee"}
Oct 06 15:16:14 crc kubenswrapper[4888]: I1006 15:16:14.600384 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5","Type":"ContainerStarted","Data":"c2cfa3b1e119fbd4d3598dd7716e03ddcda3827b91e1b021e0e2d6782b89fe0f"}
Oct 06 15:16:14 crc kubenswrapper[4888]: I1006 15:16:14.602085 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nzvpg" event={"ID":"82705879-10de-4927-946c-c55766069d1b","Type":"ContainerStarted","Data":"ddb20f11835d733f83138207339998e5fd40c23c2926d32ba76860c156cfcf33"}
Oct 06 15:16:14 crc kubenswrapper[4888]: E1006 15:16:14.604577 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7" podUID="ac5e3688-6436-419f-9267-a341ff87652f"
Oct 06 15:16:14 crc kubenswrapper[4888]: E1006 15:16:14.604857 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm" podUID="772313fa-9fed-485d-9a5e-8e4b877d0508"
Oct 06 15:16:14 crc kubenswrapper[4888]: I1006 15:16:14.748455 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Oct 06 15:16:14 crc kubenswrapper[4888]: W1006 15:16:14.749104 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49e0f5f3_c656_4e89_a5d6_73443af4afc4.slice/crio-08f9bce678f47067d06b821b34267c388434e069cab008d2abfd07e3806e49be WatchSource:0}: Error finding container 08f9bce678f47067d06b821b34267c388434e069cab008d2abfd07e3806e49be: Status 404 returned error can't find the container with id 08f9bce678f47067d06b821b34267c388434e069cab008d2abfd07e3806e49be
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.004325 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2"
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.097634 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g"
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.155441 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57bda89f-ca60-4250-90f4-b6bff2713ac0-config\") pod \"57bda89f-ca60-4250-90f4-b6bff2713ac0\" (UID: \"57bda89f-ca60-4250-90f4-b6bff2713ac0\") "
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.155557 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6l9l\" (UniqueName: \"kubernetes.io/projected/57bda89f-ca60-4250-90f4-b6bff2713ac0-kube-api-access-n6l9l\") pod \"57bda89f-ca60-4250-90f4-b6bff2713ac0\" (UID: \"57bda89f-ca60-4250-90f4-b6bff2713ac0\") "
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.156498 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57bda89f-ca60-4250-90f4-b6bff2713ac0-config" (OuterVolumeSpecName: "config") pod "57bda89f-ca60-4250-90f4-b6bff2713ac0" (UID: "57bda89f-ca60-4250-90f4-b6bff2713ac0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.162674 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57bda89f-ca60-4250-90f4-b6bff2713ac0-kube-api-access-n6l9l" (OuterVolumeSpecName: "kube-api-access-n6l9l") pod "57bda89f-ca60-4250-90f4-b6bff2713ac0" (UID: "57bda89f-ca60-4250-90f4-b6bff2713ac0"). InnerVolumeSpecName "kube-api-access-n6l9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.257514 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-dns-svc\") pod \"1937aa19-8288-41a0-98b5-049842d7ee4f\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") "
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.257586 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-config\") pod \"1937aa19-8288-41a0-98b5-049842d7ee4f\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") "
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.257607 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8dq5\" (UniqueName: \"kubernetes.io/projected/1937aa19-8288-41a0-98b5-049842d7ee4f-kube-api-access-m8dq5\") pod \"1937aa19-8288-41a0-98b5-049842d7ee4f\" (UID: \"1937aa19-8288-41a0-98b5-049842d7ee4f\") "
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.258074 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6l9l\" (UniqueName: \"kubernetes.io/projected/57bda89f-ca60-4250-90f4-b6bff2713ac0-kube-api-access-n6l9l\") on node \"crc\" DevicePath \"\""
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.258095 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57bda89f-ca60-4250-90f4-b6bff2713ac0-config\") on node \"crc\" DevicePath \"\""
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.258155 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1937aa19-8288-41a0-98b5-049842d7ee4f" (UID: "1937aa19-8288-41a0-98b5-049842d7ee4f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.258167 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-config" (OuterVolumeSpecName: "config") pod "1937aa19-8288-41a0-98b5-049842d7ee4f" (UID: "1937aa19-8288-41a0-98b5-049842d7ee4f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.260572 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1937aa19-8288-41a0-98b5-049842d7ee4f-kube-api-access-m8dq5" (OuterVolumeSpecName: "kube-api-access-m8dq5") pod "1937aa19-8288-41a0-98b5-049842d7ee4f" (UID: "1937aa19-8288-41a0-98b5-049842d7ee4f"). InnerVolumeSpecName "kube-api-access-m8dq5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.359180 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.359210 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1937aa19-8288-41a0-98b5-049842d7ee4f-config\") on node \"crc\" DevicePath \"\""
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.359221 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8dq5\" (UniqueName: \"kubernetes.io/projected/1937aa19-8288-41a0-98b5-049842d7ee4f-kube-api-access-m8dq5\") on node \"crc\" DevicePath \"\""
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.617875 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f44ccc0c-19ed-4959-ac2c-46842cd27fc1","Type":"ContainerStarted","Data":"bb989701414b929a31612dd68136f8326ca10cd6168097c538102b30298af31e"}
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.623669 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b6265e0c-c180-4f1b-9d3b-73321ed1caf5","Type":"ContainerStarted","Data":"9585c2b917805d17bca570110ca18e6fbdea3323f889e9fbb31cde30f40e9e51"}
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.637901 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"49e0f5f3-c656-4e89-a5d6-73443af4afc4","Type":"ContainerStarted","Data":"08f9bce678f47067d06b821b34267c388434e069cab008d2abfd07e3806e49be"}
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.639583 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2" event={"ID":"57bda89f-ca60-4250-90f4-b6bff2713ac0","Type":"ContainerDied","Data":"178f9283314fe4fc096032a3eacc33b26298456414c71f719298196a9761a5e4"}
Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.639675 4888 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-knjf2" Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.653871 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g" event={"ID":"1937aa19-8288-41a0-98b5-049842d7ee4f","Type":"ContainerDied","Data":"130a66b12f58eef882a7970144ebcb52b4470cec2d3b5a2dd36ba2bcb3aa5e66"} Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.653915 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mwk2g" Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.679318 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mczwz"] Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.718484 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-knjf2"] Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.724061 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-knjf2"] Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.762177 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mwk2g"] Oct 06 15:16:15 crc kubenswrapper[4888]: I1006 15:16:15.767174 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mwk2g"] Oct 06 15:16:16 crc kubenswrapper[4888]: I1006 15:16:16.438200 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 06 15:16:16 crc kubenswrapper[4888]: I1006 15:16:16.541661 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 06 15:16:16 crc kubenswrapper[4888]: W1006 15:16:16.593050 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5b21496_4576_4872_9a7b_a0fa475466a6.slice/crio-99762bddadf290ff004cb90d76e4cce5f71816405cc00c2e6daa72a70e3d268c WatchSource:0}: Error finding container 99762bddadf290ff004cb90d76e4cce5f71816405cc00c2e6daa72a70e3d268c: Status 404 returned error can't find the container with id 99762bddadf290ff004cb90d76e4cce5f71816405cc00c2e6daa72a70e3d268c Oct 06 15:16:16 crc kubenswrapper[4888]: I1006 15:16:16.663761 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e5b21496-4576-4872-9a7b-a0fa475466a6","Type":"ContainerStarted","Data":"99762bddadf290ff004cb90d76e4cce5f71816405cc00c2e6daa72a70e3d268c"} Oct 06 15:16:16 crc kubenswrapper[4888]: I1006 15:16:16.665938 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mczwz" event={"ID":"737bd423-e5c2-4a4e-9463-56e1bb95b101","Type":"ContainerStarted","Data":"8433ca47907e27e0d4a6c68fa54140e07bfbfcb3a31ffe8372361c1c28b46aff"} Oct 06 15:16:16 crc kubenswrapper[4888]: I1006 15:16:16.934171 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1937aa19-8288-41a0-98b5-049842d7ee4f" path="/var/lib/kubelet/pods/1937aa19-8288-41a0-98b5-049842d7ee4f/volumes" Oct 06 15:16:16 crc kubenswrapper[4888]: I1006 15:16:16.934658 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57bda89f-ca60-4250-90f4-b6bff2713ac0" path="/var/lib/kubelet/pods/57bda89f-ca60-4250-90f4-b6bff2713ac0/volumes" Oct 06 15:16:17 crc kubenswrapper[4888]: I1006 15:16:17.676131 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"74ff743d-532c-4a3a-bf4a-967b9edca039","Type":"ContainerStarted","Data":"6f67683463fd6a4cf40bdf35138ece6dfa4c3e7b15e86dd3a7c74002d49a8b72"} Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.732114 4888 generic.go:334] "Generic (PLEG): container finished" podID="737bd423-e5c2-4a4e-9463-56e1bb95b101" containerID="0bc0b487fcecf4b02cf029921f4fab70ab0d6e80e193f2cf3e049e8e025034a1" exitCode=0 Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.732215 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mczwz" event={"ID":"737bd423-e5c2-4a4e-9463-56e1bb95b101","Type":"ContainerDied","Data":"0bc0b487fcecf4b02cf029921f4fab70ab0d6e80e193f2cf3e049e8e025034a1"} Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.735817 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6a9289a6-f236-4e76-ac2b-4ef38163f845","Type":"ContainerStarted","Data":"518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f"} Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.735981 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.739995 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5","Type":"ContainerStarted","Data":"2e4ddad29859193078f07dde8129d5177c02e6624d066e18cda852e55a3c1e14"} Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.742528 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b6265e0c-c180-4f1b-9d3b-73321ed1caf5","Type":"ContainerStarted","Data":"1339542434ab603f78ce56d1c36fb905b9ddc3208882bca475bd7a5373976ecc"} Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.745328 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"49e0f5f3-c656-4e89-a5d6-73443af4afc4","Type":"ContainerStarted","Data":"5be3f1f9f64634ff6f11d07f79f1ca89327db00f6f44a69532cbbcb02c1d7488"} Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.745992 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.755245 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nzvpg" event={"ID":"82705879-10de-4927-946c-c55766069d1b","Type":"ContainerStarted","Data":"2df303bc98abc0f799cda5ae4888a5de9bd1c6c923b4a85f3465359d3957b8e3"} Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.755496 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-nzvpg" Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.758678 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"74ff743d-532c-4a3a-bf4a-967b9edca039","Type":"ContainerStarted","Data":"783a5d8290f1be6aea281c7cf56e963fd9bc2db9eddaac2f7cd680b5e5de9feb"} Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.760988 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e5b21496-4576-4872-9a7b-a0fa475466a6","Type":"ContainerStarted","Data":"ee267e8346d1002fe38e1ddbd24010399227ad7f52a6b478bd82e18429b47cfb"} Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.774006 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=25.628522931 
podStartE2EDuration="33.773981981s" podCreationTimestamp="2025-10-06 15:15:50 +0000 UTC" firstStartedPulling="2025-10-06 15:16:14.586209803 +0000 UTC m=+914.398560511" lastFinishedPulling="2025-10-06 15:16:22.731668843 +0000 UTC m=+922.544019561" observedRunningTime="2025-10-06 15:16:23.766449363 +0000 UTC m=+923.578800081" watchObservedRunningTime="2025-10-06 15:16:23.773981981 +0000 UTC m=+923.586332709" Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.846870 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=28.700203744 podStartE2EDuration="35.846850276s" podCreationTimestamp="2025-10-06 15:15:48 +0000 UTC" firstStartedPulling="2025-10-06 15:16:14.750818578 +0000 UTC m=+914.563169296" lastFinishedPulling="2025-10-06 15:16:21.89746511 +0000 UTC m=+921.709815828" observedRunningTime="2025-10-06 15:16:23.844148051 +0000 UTC m=+923.656498769" watchObservedRunningTime="2025-10-06 15:16:23.846850276 +0000 UTC m=+923.659201004" Oct 06 15:16:23 crc kubenswrapper[4888]: I1006 15:16:23.871753 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nzvpg" podStartSLOduration=23.720797505 podStartE2EDuration="31.871736049s" podCreationTimestamp="2025-10-06 15:15:52 +0000 UTC" firstStartedPulling="2025-10-06 15:16:14.437733736 +0000 UTC m=+914.250084464" lastFinishedPulling="2025-10-06 15:16:22.58867229 +0000 UTC m=+922.401023008" observedRunningTime="2025-10-06 15:16:23.867479836 +0000 UTC m=+923.679830564" watchObservedRunningTime="2025-10-06 15:16:23.871736049 +0000 UTC m=+923.684086767" Oct 06 15:16:24 crc kubenswrapper[4888]: I1006 15:16:24.770573 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mczwz" event={"ID":"737bd423-e5c2-4a4e-9463-56e1bb95b101","Type":"ContainerStarted","Data":"7c0cc5230fdfde80d80653e0b4f90c861393e2397163772ada0d74430537611b"} Oct 06 15:16:24 crc kubenswrapper[4888]: I1006 15:16:24.770941 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mczwz" event={"ID":"737bd423-e5c2-4a4e-9463-56e1bb95b101","Type":"ContainerStarted","Data":"4a0682a9e80652e95bbfa2f619a9eeebfaaa718426c9cc040b2a64f5decd2f40"} Oct 06 15:16:25 crc kubenswrapper[4888]: I1006 15:16:25.777014 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:16:25 crc kubenswrapper[4888]: I1006 15:16:25.777380 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:16:26 crc kubenswrapper[4888]: I1006 15:16:26.793526 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"74ff743d-532c-4a3a-bf4a-967b9edca039","Type":"ContainerStarted","Data":"709fca88be66d866efbd7ebcc1781cb70888b42e72ac337e6f43fad334cd22cd"} Oct 06 15:16:26 crc kubenswrapper[4888]: I1006 15:16:26.797343 4888 generic.go:334] "Generic (PLEG): container finished" podID="3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5" containerID="2e4ddad29859193078f07dde8129d5177c02e6624d066e18cda852e55a3c1e14" exitCode=0 Oct 06 15:16:26 crc kubenswrapper[4888]: I1006 15:16:26.797412 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5","Type":"ContainerDied","Data":"2e4ddad29859193078f07dde8129d5177c02e6624d066e18cda852e55a3c1e14"} Oct 06 15:16:26 crc kubenswrapper[4888]: I1006 15:16:26.804526 4888 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b6265e0c-c180-4f1b-9d3b-73321ed1caf5","Type":"ContainerDied","Data":"1339542434ab603f78ce56d1c36fb905b9ddc3208882bca475bd7a5373976ecc"} Oct 06 15:16:26 crc kubenswrapper[4888]: I1006 15:16:26.804153 4888 generic.go:334] "Generic (PLEG): container finished" podID="b6265e0c-c180-4f1b-9d3b-73321ed1caf5" containerID="1339542434ab603f78ce56d1c36fb905b9ddc3208882bca475bd7a5373976ecc" exitCode=0 Oct 06 15:16:26 crc kubenswrapper[4888]: I1006 15:16:26.831534 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-mczwz" podStartSLOduration=28.634808403 podStartE2EDuration="34.831516908s" podCreationTimestamp="2025-10-06 15:15:52 +0000 UTC" firstStartedPulling="2025-10-06 15:16:15.790200832 +0000 UTC m=+915.602551550" lastFinishedPulling="2025-10-06 15:16:21.986909337 +0000 UTC m=+921.799260055" observedRunningTime="2025-10-06 15:16:24.796284209 +0000 UTC m=+924.608634937" watchObservedRunningTime="2025-10-06 15:16:26.831516908 +0000 UTC m=+926.643867626" Oct 06 15:16:28 crc kubenswrapper[4888]: I1006 15:16:28.621335 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 06 15:16:28 crc kubenswrapper[4888]: I1006 15:16:28.819601 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e5b21496-4576-4872-9a7b-a0fa475466a6","Type":"ContainerStarted","Data":"9656edece684c950d8fff80c20fb5e028a9953054b55fb4207b58fc551b6c808"} Oct 06 15:16:29 crc kubenswrapper[4888]: I1006 15:16:29.827021 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ed3909-71e7-40e7-9e97-e9917d621080","Type":"ContainerStarted","Data":"b5a72bf36651f8d766603c0c96d06ec8573e06712ce58396c5afbfef3f771a97"} Oct 06 15:16:29 crc kubenswrapper[4888]: I1006 15:16:29.830657 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5","Type":"ContainerStarted","Data":"b9e2e86155f18a6273700fb85f50a4b6ed96e44293b8011152bab743b64548bf"} Oct 06 15:16:29 crc kubenswrapper[4888]: I1006 15:16:29.834597 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b6265e0c-c180-4f1b-9d3b-73321ed1caf5","Type":"ContainerStarted","Data":"cd643a26aa8516a0dfba3fb243518e553fbe6bfe41684378ea6c91bbdcb73b48"} Oct 06 15:16:29 crc kubenswrapper[4888]: I1006 15:16:29.887331 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=27.315466785 podStartE2EDuration="36.887309719s" podCreationTimestamp="2025-10-06 15:15:53 +0000 UTC" firstStartedPulling="2025-10-06 15:16:16.603034583 +0000 UTC m=+916.415385301" lastFinishedPulling="2025-10-06 15:16:26.174877517 +0000 UTC m=+925.987228235" observedRunningTime="2025-10-06 15:16:29.880515066 +0000 UTC m=+929.692865784" watchObservedRunningTime="2025-10-06 15:16:29.887309719 +0000 UTC m=+929.699660437" Oct 06 15:16:29 crc kubenswrapper[4888]: I1006 15:16:29.949723 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=35.924339726 podStartE2EDuration="43.949704394s" podCreationTimestamp="2025-10-06 15:15:46 +0000 UTC" firstStartedPulling="2025-10-06 15:16:14.563306251 +0000 UTC m=+914.375656969" lastFinishedPulling="2025-10-06 15:16:22.588670919 +0000 UTC m=+922.401021637" 
observedRunningTime="2025-10-06 15:16:29.948905579 +0000 UTC m=+929.761256307" watchObservedRunningTime="2025-10-06 15:16:29.949704394 +0000 UTC m=+929.762055112" Oct 06 15:16:29 crc kubenswrapper[4888]: I1006 15:16:29.954153 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=24.542446333 podStartE2EDuration="33.954132734s" podCreationTimestamp="2025-10-06 15:15:56 +0000 UTC" firstStartedPulling="2025-10-06 15:16:16.776613259 +0000 UTC m=+916.588963977" lastFinishedPulling="2025-10-06 15:16:26.18829966 +0000 UTC m=+926.000650378" observedRunningTime="2025-10-06 15:16:29.916183458 +0000 UTC m=+929.728534176" watchObservedRunningTime="2025-10-06 15:16:29.954132734 +0000 UTC m=+929.766483452" Oct 06 15:16:29 crc kubenswrapper[4888]: I1006 15:16:29.978124 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=35.947094263 podStartE2EDuration="43.97810887s" podCreationTimestamp="2025-10-06 15:15:46 +0000 UTC" firstStartedPulling="2025-10-06 15:16:14.5931183 +0000 UTC m=+914.405469018" lastFinishedPulling="2025-10-06 15:16:22.624132907 +0000 UTC m=+922.436483625" observedRunningTime="2025-10-06 15:16:29.973421001 +0000 UTC m=+929.785771719" watchObservedRunningTime="2025-10-06 15:16:29.97810887 +0000 UTC m=+929.790459578" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.482069 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.486397 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ss4h7"] Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.581293 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-dm8zj"] Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.582727 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.600750 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.603028 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-dm8zj"] Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.634073 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-dm8zj\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.634163 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-config\") pod \"dnsmasq-dns-7cb5889db5-dm8zj\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.634210 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st4p2\" (UniqueName: \"kubernetes.io/projected/8179e4f4-0560-4807-908e-87c50cbf9a55-kube-api-access-st4p2\") pod \"dnsmasq-dns-7cb5889db5-dm8zj\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.735526 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-dm8zj\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.735599 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-config\") pod \"dnsmasq-dns-7cb5889db5-dm8zj\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.735626 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st4p2\" (UniqueName: \"kubernetes.io/projected/8179e4f4-0560-4807-908e-87c50cbf9a55-kube-api-access-st4p2\") pod \"dnsmasq-dns-7cb5889db5-dm8zj\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.736788 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-dm8zj\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.737338 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-config\") pod \"dnsmasq-dns-7cb5889db5-dm8zj\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 
15:16:30.758424 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.767404 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st4p2\" (UniqueName: \"kubernetes.io/projected/8179e4f4-0560-4807-908e-87c50cbf9a55-kube-api-access-st4p2\") pod \"dnsmasq-dns-7cb5889db5-dm8zj\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.843977 4888 generic.go:334] "Generic (PLEG): container finished" podID="ac5e3688-6436-419f-9267-a341ff87652f" containerID="b284d364235da419e6587ea7b181d2389f091cb6f64412081b239897512085b3" exitCode=0 Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.844101 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7" event={"ID":"ac5e3688-6436-419f-9267-a341ff87652f","Type":"ContainerDied","Data":"b284d364235da419e6587ea7b181d2389f091cb6f64412081b239897512085b3"} Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.845751 4888 generic.go:334] "Generic (PLEG): container finished" podID="772313fa-9fed-485d-9a5e-8e4b877d0508" containerID="f5f90fc876f04a9a5939400b8795d0247fb8aec0aa5e7da3c232793dbbad2e23" exitCode=0 Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.845903 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm" event={"ID":"772313fa-9fed-485d-9a5e-8e4b877d0508","Type":"ContainerDied","Data":"f5f90fc876f04a9a5939400b8795d0247fb8aec0aa5e7da3c232793dbbad2e23"} Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.846044 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.897185 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 06 15:16:30 crc kubenswrapper[4888]: I1006 15:16:30.914194 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.138551 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 06 15:16:31 crc kubenswrapper[4888]: E1006 15:16:31.141946 4888 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 06 15:16:31 crc kubenswrapper[4888]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/772313fa-9fed-485d-9a5e-8e4b877d0508/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 06 15:16:31 crc kubenswrapper[4888]: > podSandboxID="9623f3206bbc33a18c8d2f6bff9e3a0d62b23299432cd2e9fd49ad9b13aa2225" Oct 06 15:16:31 crc kubenswrapper[4888]: E1006 15:16:31.142136 4888 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 06 15:16:31 crc kubenswrapper[4888]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2skd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-5ccc8479f9-l7jvm_openstack(772313fa-9fed-485d-9a5e-8e4b877d0508): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/772313fa-9fed-485d-9a5e-8e4b877d0508/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 06 15:16:31 crc kubenswrapper[4888]: > logger="UnhandledError" Oct 06 15:16:31 crc kubenswrapper[4888]: E1006 15:16:31.143931 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/772313fa-9fed-485d-9a5e-8e4b877d0508/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm" podUID="772313fa-9fed-485d-9a5e-8e4b877d0508" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.205115 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.212963 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-l7jvm"] Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.260604 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-xhhd2"] Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.263733 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.278769 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.315172 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.318906 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-xhhd2"] Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.355662 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-config\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.355709 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.355817 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2swb\" (UniqueName: \"kubernetes.io/projected/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-kube-api-access-p2swb\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.355861 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.429992 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-89sjh"] Oct 06 15:16:31 crc kubenswrapper[4888]: E1006 15:16:31.432745 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5e3688-6436-419f-9267-a341ff87652f" containerName="init" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.433053 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5e3688-6436-419f-9267-a341ff87652f" containerName="init" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.433408 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5e3688-6436-419f-9267-a341ff87652f" containerName="init" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.436532 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.440897 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.456778 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-config\") pod \"ac5e3688-6436-419f-9267-a341ff87652f\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.456922 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrk84\" (UniqueName: \"kubernetes.io/projected/ac5e3688-6436-419f-9267-a341ff87652f-kube-api-access-mrk84\") pod \"ac5e3688-6436-419f-9267-a341ff87652f\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.456981 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-dns-svc\") pod \"ac5e3688-6436-419f-9267-a341ff87652f\" (UID: \"ac5e3688-6436-419f-9267-a341ff87652f\") " Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.457231 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-config\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.457274 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.457468 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2swb\" (UniqueName: \"kubernetes.io/projected/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-kube-api-access-p2swb\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.457600 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.458799 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-config\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.459359 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " 
pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.459570 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.468250 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5e3688-6436-419f-9267-a341ff87652f-kube-api-access-mrk84" (OuterVolumeSpecName: "kube-api-access-mrk84") pod "ac5e3688-6436-419f-9267-a341ff87652f" (UID: "ac5e3688-6436-419f-9267-a341ff87652f"). InnerVolumeSpecName "kube-api-access-mrk84". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.472730 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-dm8zj"] Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.487234 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2swb\" (UniqueName: \"kubernetes.io/projected/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-kube-api-access-p2swb\") pod \"dnsmasq-dns-74f6f696b9-xhhd2\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.492404 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-89sjh"] Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.499155 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-config" (OuterVolumeSpecName: "config") pod "ac5e3688-6436-419f-9267-a341ff87652f" (UID: "ac5e3688-6436-419f-9267-a341ff87652f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.512556 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac5e3688-6436-419f-9267-a341ff87652f" (UID: "ac5e3688-6436-419f-9267-a341ff87652f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.558837 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8106dadc-62e5-4f81-9e15-74c474f1c111-ovs-rundir\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.559341 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8106dadc-62e5-4f81-9e15-74c474f1c111-combined-ca-bundle\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.559390 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8106dadc-62e5-4f81-9e15-74c474f1c111-config\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.559432 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8106dadc-62e5-4f81-9e15-74c474f1c111-ovn-rundir\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.559497 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55g4f\" (UniqueName: \"kubernetes.io/projected/8106dadc-62e5-4f81-9e15-74c474f1c111-kube-api-access-55g4f\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.559605 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8106dadc-62e5-4f81-9e15-74c474f1c111-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.559681 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrk84\" (UniqueName: \"kubernetes.io/projected/ac5e3688-6436-419f-9267-a341ff87652f-kube-api-access-mrk84\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.559706 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.559719 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac5e3688-6436-419f-9267-a341ff87652f-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.661170 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8106dadc-62e5-4f81-9e15-74c474f1c111-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.661224 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8106dadc-62e5-4f81-9e15-74c474f1c111-ovs-rundir\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.661248 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8106dadc-62e5-4f81-9e15-74c474f1c111-combined-ca-bundle\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.661583 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8106dadc-62e5-4f81-9e15-74c474f1c111-ovs-rundir\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.661924 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8106dadc-62e5-4f81-9e15-74c474f1c111-config\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.661967 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8106dadc-62e5-4f81-9e15-74c474f1c111-ovn-rundir\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.662004 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55g4f\" (UniqueName: \"kubernetes.io/projected/8106dadc-62e5-4f81-9e15-74c474f1c111-kube-api-access-55g4f\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.662232 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8106dadc-62e5-4f81-9e15-74c474f1c111-config\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.662312 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8106dadc-62e5-4f81-9e15-74c474f1c111-ovn-rundir\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.666237 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8106dadc-62e5-4f81-9e15-74c474f1c111-combined-ca-bundle\") pod \"ovn-controller-metrics-89sjh\" 
(UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.666420 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8106dadc-62e5-4f81-9e15-74c474f1c111-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.679152 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.681494 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55g4f\" (UniqueName: \"kubernetes.io/projected/8106dadc-62e5-4f81-9e15-74c474f1c111-kube-api-access-55g4f\") pod \"ovn-controller-metrics-89sjh\" (UID: \"8106dadc-62e5-4f81-9e15-74c474f1c111\") " pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.707789 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.722811 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.734818 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.740300 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-v4z2k" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.740492 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.741485 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.754321 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-89sjh" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.773301 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.864583 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/277682ba-0d72-43d5-b52c-59f6b02b2963-cache\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.871017 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.871266 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.871390 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dwth\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-kube-api-access-9dwth\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.871434 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/277682ba-0d72-43d5-b52c-59f6b02b2963-lock\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.930329 4888 generic.go:334] "Generic (PLEG): container finished" podID="8179e4f4-0560-4807-908e-87c50cbf9a55" containerID="bbfbf8e7f82dbf636bfbfcce34484f025c9ffd12b59646a9154465559952bb13" exitCode=0 Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.930490 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" event={"ID":"8179e4f4-0560-4807-908e-87c50cbf9a55","Type":"ContainerDied","Data":"bbfbf8e7f82dbf636bfbfcce34484f025c9ffd12b59646a9154465559952bb13"} Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.930525 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" event={"ID":"8179e4f4-0560-4807-908e-87c50cbf9a55","Type":"ContainerStarted","Data":"e0228d2fe8d431d588d80c7b2050d844853873b9e0b7989fae6eb97869eec4f1"} Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.969100 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-dm8zj"] Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.983700 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.983779 4888 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dwth\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-kube-api-access-9dwth\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.983825 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/277682ba-0d72-43d5-b52c-59f6b02b2963-lock\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.983879 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/277682ba-0d72-43d5-b52c-59f6b02b2963-cache\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.983948 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:31 crc kubenswrapper[4888]: I1006 15:16:31.984392 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.004775 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.006505 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/277682ba-0d72-43d5-b52c-59f6b02b2963-lock\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:32 crc kubenswrapper[4888]: E1006 15:16:32.006653 4888 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 15:16:32 crc kubenswrapper[4888]: E1006 15:16:32.006670 4888 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 15:16:32 crc kubenswrapper[4888]: E1006 15:16:32.006717 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift podName:277682ba-0d72-43d5-b52c-59f6b02b2963 nodeName:}" failed. No retries permitted until 2025-10-06 15:16:32.506699179 +0000 UTC m=+932.319049897 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift") pod "swift-storage-0" (UID: "277682ba-0d72-43d5-b52c-59f6b02b2963") : configmap "swift-ring-files" not found Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.008013 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ss4h7" event={"ID":"ac5e3688-6436-419f-9267-a341ff87652f","Type":"ContainerDied","Data":"380c7b64e1ab6e25f28302dc26d9886ec0665480a8f2d105937f8cbcf9ed54b2"} Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.008052 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.008077 4888 scope.go:117] "RemoveContainer" containerID="b284d364235da419e6587ea7b181d2389f091cb6f64412081b239897512085b3" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.008303 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/277682ba-0d72-43d5-b52c-59f6b02b2963-cache\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.075931 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-tbcr2"] Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.092476 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dwth\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-kube-api-access-9dwth\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.092988 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.103120 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.114396 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.155862 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tbcr2"] Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.292894 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.294657 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-dns-svc\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.295190 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmhzs\" (UniqueName: \"kubernetes.io/projected/f107030a-25fb-4025-a397-54c4e90b3a60-kube-api-access-bmhzs\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.295287 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.295575 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-config\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.361746 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-xhhd2"] Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.405013 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.405818 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-config\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.405884 4888 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.405926 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-dns-svc\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.405995 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmhzs\" (UniqueName: \"kubernetes.io/projected/f107030a-25fb-4025-a397-54c4e90b3a60-kube-api-access-bmhzs\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.406032 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.406623 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-config\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.406881 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.410776 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.413278 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-dns-svc\") pod \"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.416229 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ss4h7"] Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.430087 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ss4h7"] Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.471684 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmhzs\" (UniqueName: \"kubernetes.io/projected/f107030a-25fb-4025-a397-54c4e90b3a60-kube-api-access-bmhzs\") pod 
\"dnsmasq-dns-698758b865-tbcr2\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: W1006 15:16:32.481974 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6aa7e91_cf5b_4d19_8047_a19c1f6195e7.slice/crio-cdc8450157323d76c38bf987db918712fe2da0a1ce93e57ca7a4573b6ba3d5f7 WatchSource:0}: Error finding container cdc8450157323d76c38bf987db918712fe2da0a1ce93e57ca7a4573b6ba3d5f7: Status 404 returned error can't find the container with id cdc8450157323d76c38bf987db918712fe2da0a1ce93e57ca7a4573b6ba3d5f7 Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.507217 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:32 crc kubenswrapper[4888]: E1006 15:16:32.507431 4888 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 15:16:32 crc kubenswrapper[4888]: E1006 15:16:32.507453 4888 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 15:16:32 crc kubenswrapper[4888]: E1006 15:16:32.507694 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift podName:277682ba-0d72-43d5-b52c-59f6b02b2963 nodeName:}" failed. No retries permitted until 2025-10-06 15:16:33.507676768 +0000 UTC m=+933.320027486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift") pod "swift-storage-0" (UID: "277682ba-0d72-43d5-b52c-59f6b02b2963") : configmap "swift-ring-files" not found Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.566449 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.566501 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.566562 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.567420 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7f872a375e0d5fa3a0376b8ecf93b05be1a27ff35604df3b986a455e732259f"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.567493 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://c7f872a375e0d5fa3a0376b8ecf93b05be1a27ff35604df3b986a455e732259f" gracePeriod=600 Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.662270 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.725360 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.749537 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-89sjh"] Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.776869 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 06 15:16:32 crc kubenswrapper[4888]: E1006 15:16:32.777338 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772313fa-9fed-485d-9a5e-8e4b877d0508" containerName="init" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.777354 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="772313fa-9fed-485d-9a5e-8e4b877d0508" containerName="init" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.777547 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="772313fa-9fed-485d-9a5e-8e4b877d0508" containerName="init" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.855242 4888 util.go:30] "No sandbox for pod can be found. 
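The machine-config-daemon restart above is standard liveness-probe handling: "connection refused" on GET http://127.0.0.1:8798/health means nothing was listening on the port, the probe fails, and kubelet kills the container (here with gracePeriod=600) and starts a replacement, which shows up as the ContainerDied/ContainerStarted pair at 15:16:33–34 below. For reference, a minimal Go handler of the shape the probe expects; kubelet counts any HTTP status in [200, 400) as probe success:

```go
package main

import (
	"log"
	"net/http"
)

// Minimal handler of the shape kubelet is probing (GET http://127.0.0.1:8798/health).
// "connection refused" in the log means nothing was listening on the port at all.
func main() {
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // any 2xx/3xx response passes the probe
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}
```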
Need to start a new one" pod="openstack/ovn-northd-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.860785 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.861113 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zz9b6" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.875605 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.875947 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.928032 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-dns-svc\") pod \"772313fa-9fed-485d-9a5e-8e4b877d0508\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.928748 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-config\") pod \"772313fa-9fed-485d-9a5e-8e4b877d0508\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.928821 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2skd\" (UniqueName: \"kubernetes.io/projected/772313fa-9fed-485d-9a5e-8e4b877d0508-kube-api-access-z2skd\") pod \"772313fa-9fed-485d-9a5e-8e4b877d0508\" (UID: \"772313fa-9fed-485d-9a5e-8e4b877d0508\") " Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.929521 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26b90491-f5c9-42fd-b6a0-f21d2771566a-scripts\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.929578 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqq9\" (UniqueName: \"kubernetes.io/projected/26b90491-f5c9-42fd-b6a0-f21d2771566a-kube-api-access-clqq9\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.929636 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b90491-f5c9-42fd-b6a0-f21d2771566a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.929667 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b90491-f5c9-42fd-b6a0-f21d2771566a-config\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.929723 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/26b90491-f5c9-42fd-b6a0-f21d2771566a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.929762 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26b90491-f5c9-42fd-b6a0-f21d2771566a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.929877 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b90491-f5c9-42fd-b6a0-f21d2771566a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.936049 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772313fa-9fed-485d-9a5e-8e4b877d0508-kube-api-access-z2skd" (OuterVolumeSpecName: "kube-api-access-z2skd") pod "772313fa-9fed-485d-9a5e-8e4b877d0508" (UID: "772313fa-9fed-485d-9a5e-8e4b877d0508"). InnerVolumeSpecName "kube-api-access-z2skd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:16:32 crc kubenswrapper[4888]: I1006 15:16:32.975001 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac5e3688-6436-419f-9267-a341ff87652f" path="/var/lib/kubelet/pods/ac5e3688-6436-419f-9267-a341ff87652f/volumes" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.014293 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "772313fa-9fed-485d-9a5e-8e4b877d0508" (UID: "772313fa-9fed-485d-9a5e-8e4b877d0508"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.010412 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-config" (OuterVolumeSpecName: "config") pod "772313fa-9fed-485d-9a5e-8e4b877d0508" (UID: "772313fa-9fed-485d-9a5e-8e4b877d0508"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.046466 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26b90491-f5c9-42fd-b6a0-f21d2771566a-scripts\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.046560 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clqq9\" (UniqueName: \"kubernetes.io/projected/26b90491-f5c9-42fd-b6a0-f21d2771566a-kube-api-access-clqq9\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.046683 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b90491-f5c9-42fd-b6a0-f21d2771566a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.047506 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b90491-f5c9-42fd-b6a0-f21d2771566a-config\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.048721 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b90491-f5c9-42fd-b6a0-f21d2771566a-config\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.049476 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b90491-f5c9-42fd-b6a0-f21d2771566a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.049513 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26b90491-f5c9-42fd-b6a0-f21d2771566a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.049621 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b90491-f5c9-42fd-b6a0-f21d2771566a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.049766 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.049777 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772313fa-9fed-485d-9a5e-8e4b877d0508-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.049790 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2skd\" 
(UniqueName: \"kubernetes.io/projected/772313fa-9fed-485d-9a5e-8e4b877d0508-kube-api-access-z2skd\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.055666 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b90491-f5c9-42fd-b6a0-f21d2771566a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.058462 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26b90491-f5c9-42fd-b6a0-f21d2771566a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.059683 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.061078 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b90491-f5c9-42fd-b6a0-f21d2771566a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.061617 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b90491-f5c9-42fd-b6a0-f21d2771566a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.062033 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26b90491-f5c9-42fd-b6a0-f21d2771566a-scripts\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.075757 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-l7jvm" event={"ID":"772313fa-9fed-485d-9a5e-8e4b877d0508","Type":"ContainerDied","Data":"9623f3206bbc33a18c8d2f6bff9e3a0d62b23299432cd2e9fd49ad9b13aa2225"} Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.075822 4888 scope.go:117] "RemoveContainer" containerID="f5f90fc876f04a9a5939400b8795d0247fb8aec0aa5e7da3c232793dbbad2e23" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.076040 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.077364 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clqq9\" (UniqueName: \"kubernetes.io/projected/26b90491-f5c9-42fd-b6a0-f21d2771566a-kube-api-access-clqq9\") pod \"ovn-northd-0\" (UID: \"26b90491-f5c9-42fd-b6a0-f21d2771566a\") " pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.082063 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" event={"ID":"8179e4f4-0560-4807-908e-87c50cbf9a55","Type":"ContainerStarted","Data":"39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af"} Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.082349 4888 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" podUID="8179e4f4-0560-4807-908e-87c50cbf9a55" containerName="dnsmasq-dns" containerID="cri-o://39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af" gracePeriod=10 Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.082440 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.112291 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" podStartSLOduration=3.11226954 podStartE2EDuration="3.11226954s" podCreationTimestamp="2025-10-06 15:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:16:33.110037999 +0000 UTC m=+932.922388717" watchObservedRunningTime="2025-10-06 15:16:33.11226954 +0000 UTC m=+932.924620258" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.117037 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="c7f872a375e0d5fa3a0376b8ecf93b05be1a27ff35604df3b986a455e732259f" exitCode=0 Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.117108 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"c7f872a375e0d5fa3a0376b8ecf93b05be1a27ff35604df3b986a455e732259f"} Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.117142 4888 scope.go:117] "RemoveContainer" containerID="2a27765b71e89e6df1e1c89446e393b644a2f95a6e1272b73bf5478141df6f61" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.118273 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.120672 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-89sjh" event={"ID":"8106dadc-62e5-4f81-9e15-74c474f1c111","Type":"ContainerStarted","Data":"5c7c011f98342ec03e6788a13738619c8874fecf574ef3a11cde270fff462ec1"} Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.136726 4888 generic.go:334] "Generic (PLEG): container finished" podID="e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" containerID="b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03" exitCode=0 Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.137477 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" event={"ID":"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7","Type":"ContainerDied","Data":"b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03"} Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.137523 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" event={"ID":"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7","Type":"ContainerStarted","Data":"cdc8450157323d76c38bf987db918712fe2da0a1ce93e57ca7a4573b6ba3d5f7"} Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.256514 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-l7jvm"] Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.266859 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-l7jvm"] Oct 06 15:16:33 crc kubenswrapper[4888]: E1006 15:16:33.390540 4888 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6aa7e91_cf5b_4d19_8047_a19c1f6195e7.slice/crio-b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6aa7e91_cf5b_4d19_8047_a19c1f6195e7.slice/crio-conmon-b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8179e4f4_0560_4807_908e_87c50cbf9a55.slice/crio-conmon-39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod772313fa_9fed_485d_9a5e_8e4b877d0508.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8179e4f4_0560_4807_908e_87c50cbf9a55.slice/crio-39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af.scope\": RecentStats: unable to find data in memory cache]" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.438999 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tbcr2"] Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.571072 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:33 crc kubenswrapper[4888]: E1006 15:16:33.571592 4888 projected.go:288] Couldn't get configMap openstack/swift-ring-files: 
configmap "swift-ring-files" not found Oct 06 15:16:33 crc kubenswrapper[4888]: E1006 15:16:33.571612 4888 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 15:16:33 crc kubenswrapper[4888]: E1006 15:16:33.571661 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift podName:277682ba-0d72-43d5-b52c-59f6b02b2963 nodeName:}" failed. No retries permitted until 2025-10-06 15:16:35.571643348 +0000 UTC m=+935.383994066 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift") pod "swift-storage-0" (UID: "277682ba-0d72-43d5-b52c-59f6b02b2963") : configmap "swift-ring-files" not found Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.711567 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.817418 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.976693 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-config\") pod \"8179e4f4-0560-4807-908e-87c50cbf9a55\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.977888 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-dns-svc\") pod \"8179e4f4-0560-4807-908e-87c50cbf9a55\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.978556 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st4p2\" (UniqueName: \"kubernetes.io/projected/8179e4f4-0560-4807-908e-87c50cbf9a55-kube-api-access-st4p2\") pod \"8179e4f4-0560-4807-908e-87c50cbf9a55\" (UID: \"8179e4f4-0560-4807-908e-87c50cbf9a55\") " Oct 06 15:16:33 crc kubenswrapper[4888]: I1006 15:16:33.999442 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8179e4f4-0560-4807-908e-87c50cbf9a55-kube-api-access-st4p2" (OuterVolumeSpecName: "kube-api-access-st4p2") pod "8179e4f4-0560-4807-908e-87c50cbf9a55" (UID: "8179e4f4-0560-4807-908e-87c50cbf9a55"). InnerVolumeSpecName "kube-api-access-st4p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.023313 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-config" (OuterVolumeSpecName: "config") pod "8179e4f4-0560-4807-908e-87c50cbf9a55" (UID: "8179e4f4-0560-4807-908e-87c50cbf9a55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.033475 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8179e4f4-0560-4807-908e-87c50cbf9a55" (UID: "8179e4f4-0560-4807-908e-87c50cbf9a55"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.082061 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st4p2\" (UniqueName: \"kubernetes.io/projected/8179e4f4-0560-4807-908e-87c50cbf9a55-kube-api-access-st4p2\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.082105 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.082115 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8179e4f4-0560-4807-908e-87c50cbf9a55-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.157541 4888 generic.go:334] "Generic (PLEG): container finished" podID="8179e4f4-0560-4807-908e-87c50cbf9a55" containerID="39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af" exitCode=0 Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.157625 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" event={"ID":"8179e4f4-0560-4807-908e-87c50cbf9a55","Type":"ContainerDied","Data":"39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af"} Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.157663 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" event={"ID":"8179e4f4-0560-4807-908e-87c50cbf9a55","Type":"ContainerDied","Data":"e0228d2fe8d431d588d80c7b2050d844853873b9e0b7989fae6eb97869eec4f1"} Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.157684 4888 scope.go:117] "RemoveContainer" containerID="39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.157813 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-dm8zj" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.160696 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"2bf18ef6eff916382fcaa294f56d74e00f198381baa7886ed31f9974dc677b14"} Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.164093 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-89sjh" event={"ID":"8106dadc-62e5-4f81-9e15-74c474f1c111","Type":"ContainerStarted","Data":"3443ab028a7308641524cdf3b03f9769cf745333e5e62d1227290e645894568f"} Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.167048 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" event={"ID":"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7","Type":"ContainerStarted","Data":"95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91"} Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.168267 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.176183 4888 generic.go:334] "Generic (PLEG): container finished" podID="f107030a-25fb-4025-a397-54c4e90b3a60" containerID="49de2a23c98aaa27672e46707c5335d1407f833c9febb56ecc031faf137e3b69" exitCode=0 Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.176261 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tbcr2" event={"ID":"f107030a-25fb-4025-a397-54c4e90b3a60","Type":"ContainerDied","Data":"49de2a23c98aaa27672e46707c5335d1407f833c9febb56ecc031faf137e3b69"} Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.176295 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tbcr2" event={"ID":"f107030a-25fb-4025-a397-54c4e90b3a60","Type":"ContainerStarted","Data":"a4063be1f7b171feab37d52bcce0eeeccaabd27d4c81bb789d8f6bac1452401a"} Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.180132 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26b90491-f5c9-42fd-b6a0-f21d2771566a","Type":"ContainerStarted","Data":"a325a389c149e146326b682d725bbcd77b2b82927060c1323cd1c1866bf28e4d"} Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.192411 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-89sjh" podStartSLOduration=3.192384557 podStartE2EDuration="3.192384557s" podCreationTimestamp="2025-10-06 15:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:16:34.18739078 +0000 UTC m=+933.999741508" watchObservedRunningTime="2025-10-06 15:16:34.192384557 +0000 UTC m=+934.004735275" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.199699 4888 scope.go:117] "RemoveContainer" containerID="bbfbf8e7f82dbf636bfbfcce34484f025c9ffd12b59646a9154465559952bb13" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.263349 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" podStartSLOduration=3.263323171 podStartE2EDuration="3.263323171s" podCreationTimestamp="2025-10-06 15:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-06 15:16:34.260219064 +0000 UTC m=+934.072569782" watchObservedRunningTime="2025-10-06 15:16:34.263323171 +0000 UTC m=+934.075673899" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.305910 4888 scope.go:117] "RemoveContainer" containerID="39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af" Oct 06 15:16:34 crc kubenswrapper[4888]: E1006 15:16:34.306936 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af\": container with ID starting with 39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af not found: ID does not exist" containerID="39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.306977 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af"} err="failed to get container status \"39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af\": rpc error: code = NotFound desc = could not find container \"39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af\": container with ID starting with 39c551593b8b26884d2c3aba9480a4f56507297725bfeeb6d368d449c7d7b5af not found: ID does not exist" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.307003 4888 scope.go:117] "RemoveContainer" containerID="bbfbf8e7f82dbf636bfbfcce34484f025c9ffd12b59646a9154465559952bb13" Oct 06 15:16:34 crc kubenswrapper[4888]: E1006 15:16:34.310257 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbfbf8e7f82dbf636bfbfcce34484f025c9ffd12b59646a9154465559952bb13\": container with ID starting with bbfbf8e7f82dbf636bfbfcce34484f025c9ffd12b59646a9154465559952bb13 not found: ID does not exist" containerID="bbfbf8e7f82dbf636bfbfcce34484f025c9ffd12b59646a9154465559952bb13" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.310305 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbfbf8e7f82dbf636bfbfcce34484f025c9ffd12b59646a9154465559952bb13"} err="failed to get container status \"bbfbf8e7f82dbf636bfbfcce34484f025c9ffd12b59646a9154465559952bb13\": rpc error: code = NotFound desc = could not find container \"bbfbf8e7f82dbf636bfbfcce34484f025c9ffd12b59646a9154465559952bb13\": container with ID starting with bbfbf8e7f82dbf636bfbfcce34484f025c9ffd12b59646a9154465559952bb13 not found: ID does not exist" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.346276 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-dm8zj"] Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.378974 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-dm8zj"] Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.935087 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="772313fa-9fed-485d-9a5e-8e4b877d0508" path="/var/lib/kubelet/pods/772313fa-9fed-485d-9a5e-8e4b877d0508/volumes" Oct 06 15:16:34 crc kubenswrapper[4888]: I1006 15:16:34.935765 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8179e4f4-0560-4807-908e-87c50cbf9a55" path="/var/lib/kubelet/pods/8179e4f4-0560-4807-908e-87c50cbf9a55/volumes" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.190451 4888 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tbcr2" event={"ID":"f107030a-25fb-4025-a397-54c4e90b3a60","Type":"ContainerStarted","Data":"fe2f9dc8b1e5c7fe90cafaba595ece0e1647c9612250257839bcf166ad4860ea"} Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.191622 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.218218 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-tbcr2" podStartSLOduration=4.218196715 podStartE2EDuration="4.218196715s" podCreationTimestamp="2025-10-06 15:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:16:35.213433945 +0000 UTC m=+935.025784663" watchObservedRunningTime="2025-10-06 15:16:35.218196715 +0000 UTC m=+935.030547433" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.566208 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qcnvj"] Oct 06 15:16:35 crc kubenswrapper[4888]: E1006 15:16:35.567035 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8179e4f4-0560-4807-908e-87c50cbf9a55" containerName="dnsmasq-dns" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.567053 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8179e4f4-0560-4807-908e-87c50cbf9a55" containerName="dnsmasq-dns" Oct 06 15:16:35 crc kubenswrapper[4888]: E1006 15:16:35.567065 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8179e4f4-0560-4807-908e-87c50cbf9a55" containerName="init" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.567074 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8179e4f4-0560-4807-908e-87c50cbf9a55" containerName="init" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.567247 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="8179e4f4-0560-4807-908e-87c50cbf9a55" containerName="dnsmasq-dns" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.567865 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.570608 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.570853 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.571126 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.586383 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qcnvj"] Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.630586 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:35 crc kubenswrapper[4888]: E1006 15:16:35.630848 4888 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 15:16:35 crc kubenswrapper[4888]: E1006 15:16:35.631370 4888 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 15:16:35 crc kubenswrapper[4888]: E1006 15:16:35.631439 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift podName:277682ba-0d72-43d5-b52c-59f6b02b2963 nodeName:}" failed. No retries permitted until 2025-10-06 15:16:39.631416908 +0000 UTC m=+939.443767626 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift") pod "swift-storage-0" (UID: "277682ba-0d72-43d5-b52c-59f6b02b2963") : configmap "swift-ring-files" not found Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.733013 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-swiftconf\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.733069 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-ring-data-devices\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.733110 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-combined-ca-bundle\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.733134 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdgj7\" (UniqueName: \"kubernetes.io/projected/4aa2563c-6959-448c-9708-99f647cd24e1-kube-api-access-hdgj7\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.733160 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-scripts\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.733212 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-dispersionconf\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.733228 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4aa2563c-6959-448c-9708-99f647cd24e1-etc-swift\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.834423 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-dispersionconf\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.834471 4888 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4aa2563c-6959-448c-9708-99f647cd24e1-etc-swift\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.834542 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-swiftconf\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.834575 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-ring-data-devices\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.834622 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-combined-ca-bundle\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.834652 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdgj7\" (UniqueName: \"kubernetes.io/projected/4aa2563c-6959-448c-9708-99f647cd24e1-kube-api-access-hdgj7\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.834681 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-scripts\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.835496 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-ring-data-devices\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.835636 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-scripts\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.835980 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4aa2563c-6959-448c-9708-99f647cd24e1-etc-swift\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.842383 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-combined-ca-bundle\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.849243 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-dispersionconf\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.851151 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-swiftconf\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.859826 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdgj7\" (UniqueName: \"kubernetes.io/projected/4aa2563c-6959-448c-9708-99f647cd24e1-kube-api-access-hdgj7\") pod \"swift-ring-rebalance-qcnvj\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:35 crc kubenswrapper[4888]: I1006 15:16:35.888829 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:36 crc kubenswrapper[4888]: I1006 15:16:36.205234 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26b90491-f5c9-42fd-b6a0-f21d2771566a","Type":"ContainerStarted","Data":"602260f9593644033c3a1d3ddc2db3337b22d999e592b9d7e6399173964e08c9"} Oct 06 15:16:36 crc kubenswrapper[4888]: I1006 15:16:36.205573 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26b90491-f5c9-42fd-b6a0-f21d2771566a","Type":"ContainerStarted","Data":"1c2cb3c07dbcde88ce8ba6f5a486b3593a0ac854aba47131b4accd887d5d939e"} Oct 06 15:16:36 crc kubenswrapper[4888]: I1006 15:16:36.228927 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.951291619 podStartE2EDuration="4.228910517s" podCreationTimestamp="2025-10-06 15:16:32 +0000 UTC" firstStartedPulling="2025-10-06 15:16:33.718144501 +0000 UTC m=+933.530495219" lastFinishedPulling="2025-10-06 15:16:34.995763399 +0000 UTC m=+934.808114117" observedRunningTime="2025-10-06 15:16:36.226442209 +0000 UTC m=+936.038792937" watchObservedRunningTime="2025-10-06 15:16:36.228910517 +0000 UTC m=+936.041261235" Oct 06 15:16:36 crc kubenswrapper[4888]: I1006 15:16:36.383615 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qcnvj"] Oct 06 15:16:36 crc kubenswrapper[4888]: W1006 15:16:36.391304 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4aa2563c_6959_448c_9708_99f647cd24e1.slice/crio-0192f72200f7bc9a66e136283bbc3180ab9ae7926d98013ba9d094f3e0d271f7 WatchSource:0}: Error finding container 0192f72200f7bc9a66e136283bbc3180ab9ae7926d98013ba9d094f3e0d271f7: Status 404 returned error can't find the container with id 0192f72200f7bc9a66e136283bbc3180ab9ae7926d98013ba9d094f3e0d271f7 Oct 06 15:16:37 crc kubenswrapper[4888]: I1006 15:16:37.234502 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-ring-rebalance-qcnvj" event={"ID":"4aa2563c-6959-448c-9708-99f647cd24e1","Type":"ContainerStarted","Data":"0192f72200f7bc9a66e136283bbc3180ab9ae7926d98013ba9d094f3e0d271f7"} Oct 06 15:16:37 crc kubenswrapper[4888]: I1006 15:16:37.235256 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 06 15:16:37 crc kubenswrapper[4888]: I1006 15:16:37.876505 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 06 15:16:37 crc kubenswrapper[4888]: I1006 15:16:37.878076 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 06 15:16:37 crc kubenswrapper[4888]: I1006 15:16:37.950671 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 06 15:16:38 crc kubenswrapper[4888]: I1006 15:16:38.291001 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 06 15:16:38 crc kubenswrapper[4888]: I1006 15:16:38.524687 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 06 15:16:38 crc kubenswrapper[4888]: I1006 15:16:38.524967 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 06 15:16:38 crc kubenswrapper[4888]: I1006 15:16:38.571543 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 06 15:16:39 crc kubenswrapper[4888]: I1006 15:16:39.297773 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 06 15:16:39 crc kubenswrapper[4888]: I1006 15:16:39.709839 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:39 crc kubenswrapper[4888]: E1006 15:16:39.710127 4888 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 15:16:39 crc kubenswrapper[4888]: E1006 15:16:39.710151 4888 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 15:16:39 crc kubenswrapper[4888]: E1006 15:16:39.710221 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift podName:277682ba-0d72-43d5-b52c-59f6b02b2963 nodeName:}" failed. No retries permitted until 2025-10-06 15:16:47.710203447 +0000 UTC m=+947.522554165 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift") pod "swift-storage-0" (UID: "277682ba-0d72-43d5-b52c-59f6b02b2963") : configmap "swift-ring-files" not found Oct 06 15:16:41 crc kubenswrapper[4888]: I1006 15:16:41.681146 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:42 crc kubenswrapper[4888]: I1006 15:16:42.274246 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qcnvj" event={"ID":"4aa2563c-6959-448c-9708-99f647cd24e1","Type":"ContainerStarted","Data":"ef89074ae5630e36224d0ddf722bfb42a78ad54e1b187d6546501c2aab16dc77"} Oct 06 15:16:42 crc kubenswrapper[4888]: I1006 15:16:42.298355 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qcnvj" podStartSLOduration=2.482814047 podStartE2EDuration="7.298334586s" podCreationTimestamp="2025-10-06 15:16:35 +0000 UTC" firstStartedPulling="2025-10-06 15:16:36.393486659 +0000 UTC m=+936.205837377" lastFinishedPulling="2025-10-06 15:16:41.209007198 +0000 UTC m=+941.021357916" observedRunningTime="2025-10-06 15:16:42.29150485 +0000 UTC m=+942.103855588" watchObservedRunningTime="2025-10-06 15:16:42.298334586 +0000 UTC m=+942.110685304" Oct 06 15:16:42 crc kubenswrapper[4888]: I1006 15:16:42.664956 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:16:42 crc kubenswrapper[4888]: I1006 15:16:42.732708 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-xhhd2"] Oct 06 15:16:42 crc kubenswrapper[4888]: I1006 15:16:42.732987 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" podUID="e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" containerName="dnsmasq-dns" containerID="cri-o://95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91" gracePeriod=10 Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.196141 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.278491 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-dns-svc\") pod \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.278538 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-ovsdbserver-nb\") pod \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.278649 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-config\") pod \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.278790 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2swb\" (UniqueName: \"kubernetes.io/projected/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-kube-api-access-p2swb\") pod \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\" (UID: \"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7\") " Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.284594 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-kube-api-access-p2swb" (OuterVolumeSpecName: "kube-api-access-p2swb") pod "e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" (UID: "e6aa7e91-cf5b-4d19-8047-a19c1f6195e7"). InnerVolumeSpecName "kube-api-access-p2swb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.286976 4888 generic.go:334] "Generic (PLEG): container finished" podID="e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" containerID="95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91" exitCode=0 Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.287961 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.288744 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" event={"ID":"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7","Type":"ContainerDied","Data":"95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91"} Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.288779 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-xhhd2" event={"ID":"e6aa7e91-cf5b-4d19-8047-a19c1f6195e7","Type":"ContainerDied","Data":"cdc8450157323d76c38bf987db918712fe2da0a1ce93e57ca7a4573b6ba3d5f7"} Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.288814 4888 scope.go:117] "RemoveContainer" containerID="95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.325575 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-config" (OuterVolumeSpecName: "config") pod "e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" (UID: "e6aa7e91-cf5b-4d19-8047-a19c1f6195e7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.339961 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" (UID: "e6aa7e91-cf5b-4d19-8047-a19c1f6195e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.347288 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" (UID: "e6aa7e91-cf5b-4d19-8047-a19c1f6195e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.374367 4888 scope.go:117] "RemoveContainer" containerID="b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.381159 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2swb\" (UniqueName: \"kubernetes.io/projected/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-kube-api-access-p2swb\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.381191 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.381200 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.381209 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.396938 4888 scope.go:117] "RemoveContainer" containerID="95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91" Oct 06 15:16:43 crc kubenswrapper[4888]: E1006 15:16:43.397530 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91\": container with ID starting with 95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91 not found: ID does not exist" containerID="95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.397560 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91"} err="failed to get container status \"95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91\": rpc error: code = NotFound desc = could not find container \"95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91\": container with ID starting with 95f7ac48b11c8603e485ce3caafe8ed8e328ad7ad8a14eb5c0ebbc8b5fc7ae91 not found: ID does not exist" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.397581 4888 scope.go:117] "RemoveContainer" 
containerID="b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03" Oct 06 15:16:43 crc kubenswrapper[4888]: E1006 15:16:43.398106 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03\": container with ID starting with b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03 not found: ID does not exist" containerID="b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.398200 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03"} err="failed to get container status \"b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03\": rpc error: code = NotFound desc = could not find container \"b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03\": container with ID starting with b06853d6708718fae6f3bfefa70238fa66b451ae0d69a265e67da52f7cca9c03 not found: ID does not exist" Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.625693 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-xhhd2"] Oct 06 15:16:43 crc kubenswrapper[4888]: I1006 15:16:43.633434 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-xhhd2"] Oct 06 15:16:44 crc kubenswrapper[4888]: I1006 15:16:44.934491 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" path="/var/lib/kubelet/pods/e6aa7e91-cf5b-4d19-8047-a19c1f6195e7/volumes" Oct 06 15:16:47 crc kubenswrapper[4888]: I1006 15:16:47.757053 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:16:47 crc kubenswrapper[4888]: E1006 15:16:47.758685 4888 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 15:16:47 crc kubenswrapper[4888]: E1006 15:16:47.759067 4888 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 15:16:47 crc kubenswrapper[4888]: E1006 15:16:47.759242 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift podName:277682ba-0d72-43d5-b52c-59f6b02b2963 nodeName:}" failed. No retries permitted until 2025-10-06 15:17:03.759220425 +0000 UTC m=+963.571571153 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift") pod "swift-storage-0" (UID: "277682ba-0d72-43d5-b52c-59f6b02b2963") : configmap "swift-ring-files" not found Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.177750 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.242968 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5hmct"] Oct 06 15:16:48 crc kubenswrapper[4888]: E1006 15:16:48.243282 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" containerName="dnsmasq-dns" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.243298 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" containerName="dnsmasq-dns" Oct 06 15:16:48 crc kubenswrapper[4888]: E1006 15:16:48.243316 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" containerName="init" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.243322 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" containerName="init" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.243472 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6aa7e91-cf5b-4d19-8047-a19c1f6195e7" containerName="dnsmasq-dns" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.243980 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5hmct" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.264841 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5hmct"] Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.329309 4888 generic.go:334] "Generic (PLEG): container finished" podID="f44ccc0c-19ed-4959-ac2c-46842cd27fc1" containerID="bb989701414b929a31612dd68136f8326ca10cd6168097c538102b30298af31e" exitCode=0 Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.329355 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f44ccc0c-19ed-4959-ac2c-46842cd27fc1","Type":"ContainerDied","Data":"bb989701414b929a31612dd68136f8326ca10cd6168097c538102b30298af31e"} Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.369357 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbwhf\" (UniqueName: \"kubernetes.io/projected/bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9-kube-api-access-nbwhf\") pod \"keystone-db-create-5hmct\" (UID: \"bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9\") " pod="openstack/keystone-db-create-5hmct" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.471689 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbwhf\" (UniqueName: \"kubernetes.io/projected/bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9-kube-api-access-nbwhf\") pod \"keystone-db-create-5hmct\" (UID: \"bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9\") " pod="openstack/keystone-db-create-5hmct" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.490983 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbwhf\" (UniqueName: \"kubernetes.io/projected/bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9-kube-api-access-nbwhf\") pod \"keystone-db-create-5hmct\" (UID: 
\"bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9\") " pod="openstack/keystone-db-create-5hmct" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.511494 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rx7h6"] Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.512693 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rx7h6" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.528131 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rx7h6"] Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.566691 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5hmct" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.574016 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck98f\" (UniqueName: \"kubernetes.io/projected/66ea0efb-53b0-4fac-bb45-f01ce9b6430b-kube-api-access-ck98f\") pod \"placement-db-create-rx7h6\" (UID: \"66ea0efb-53b0-4fac-bb45-f01ce9b6430b\") " pod="openstack/placement-db-create-rx7h6" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.679818 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck98f\" (UniqueName: \"kubernetes.io/projected/66ea0efb-53b0-4fac-bb45-f01ce9b6430b-kube-api-access-ck98f\") pod \"placement-db-create-rx7h6\" (UID: \"66ea0efb-53b0-4fac-bb45-f01ce9b6430b\") " pod="openstack/placement-db-create-rx7h6" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.708435 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck98f\" (UniqueName: \"kubernetes.io/projected/66ea0efb-53b0-4fac-bb45-f01ce9b6430b-kube-api-access-ck98f\") pod \"placement-db-create-rx7h6\" (UID: \"66ea0efb-53b0-4fac-bb45-f01ce9b6430b\") " pod="openstack/placement-db-create-rx7h6" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.787112 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-52mbw"] Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.790225 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-52mbw" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.801388 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-52mbw"] Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.867527 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rx7h6" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.886825 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzsbc\" (UniqueName: \"kubernetes.io/projected/f5b9041d-9cb2-4bd0-a57b-2a884add6fcc-kube-api-access-gzsbc\") pod \"glance-db-create-52mbw\" (UID: \"f5b9041d-9cb2-4bd0-a57b-2a884add6fcc\") " pod="openstack/glance-db-create-52mbw" Oct 06 15:16:48 crc kubenswrapper[4888]: I1006 15:16:48.988636 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzsbc\" (UniqueName: \"kubernetes.io/projected/f5b9041d-9cb2-4bd0-a57b-2a884add6fcc-kube-api-access-gzsbc\") pod \"glance-db-create-52mbw\" (UID: \"f5b9041d-9cb2-4bd0-a57b-2a884add6fcc\") " pod="openstack/glance-db-create-52mbw" Oct 06 15:16:49 crc kubenswrapper[4888]: I1006 15:16:49.013196 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzsbc\" (UniqueName: \"kubernetes.io/projected/f5b9041d-9cb2-4bd0-a57b-2a884add6fcc-kube-api-access-gzsbc\") pod \"glance-db-create-52mbw\" (UID: \"f5b9041d-9cb2-4bd0-a57b-2a884add6fcc\") " pod="openstack/glance-db-create-52mbw" Oct 06 15:16:49 crc kubenswrapper[4888]: I1006 15:16:49.092643 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5hmct"] Oct 06 15:16:49 crc kubenswrapper[4888]: I1006 15:16:49.119464 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-52mbw" Oct 06 15:16:49 crc kubenswrapper[4888]: I1006 15:16:49.331388 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rx7h6"] Oct 06 15:16:49 crc kubenswrapper[4888]: I1006 15:16:49.367725 4888 generic.go:334] "Generic (PLEG): container finished" podID="4aa2563c-6959-448c-9708-99f647cd24e1" containerID="ef89074ae5630e36224d0ddf722bfb42a78ad54e1b187d6546501c2aab16dc77" exitCode=0 Oct 06 15:16:49 crc kubenswrapper[4888]: I1006 15:16:49.367828 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qcnvj" event={"ID":"4aa2563c-6959-448c-9708-99f647cd24e1","Type":"ContainerDied","Data":"ef89074ae5630e36224d0ddf722bfb42a78ad54e1b187d6546501c2aab16dc77"} Oct 06 15:16:49 crc kubenswrapper[4888]: I1006 15:16:49.384575 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f44ccc0c-19ed-4959-ac2c-46842cd27fc1","Type":"ContainerStarted","Data":"35e18882845af9a833ef7e3280ef021b3add404135d569827de8904efe2282ca"} Oct 06 15:16:49 crc kubenswrapper[4888]: I1006 15:16:49.384906 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:16:49 crc kubenswrapper[4888]: I1006 15:16:49.399084 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5hmct" event={"ID":"bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9","Type":"ContainerStarted","Data":"b049044b358809a98753016303c996b903f541f2e8cd299fc58bc739475bd698"} Oct 06 15:16:49 crc kubenswrapper[4888]: I1006 15:16:49.441332 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.350456684 podStartE2EDuration="1m6.441303992s" podCreationTimestamp="2025-10-06 15:15:43 +0000 UTC" firstStartedPulling="2025-10-06 15:15:45.811112325 +0000 UTC m=+885.623463053" lastFinishedPulling="2025-10-06 15:16:13.901959643 +0000 UTC 
m=+913.714310361" observedRunningTime="2025-10-06 15:16:49.431917996 +0000 UTC m=+949.244268734" watchObservedRunningTime="2025-10-06 15:16:49.441303992 +0000 UTC m=+949.253654710" Oct 06 15:16:49 crc kubenswrapper[4888]: I1006 15:16:49.471288 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-52mbw"] Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.408871 4888 generic.go:334] "Generic (PLEG): container finished" podID="bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9" containerID="731acb55a9a88edf723d3f5a04eacdb75a6bec8a6dca1c5add4804d9b79ffab4" exitCode=0 Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.408959 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5hmct" event={"ID":"bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9","Type":"ContainerDied","Data":"731acb55a9a88edf723d3f5a04eacdb75a6bec8a6dca1c5add4804d9b79ffab4"} Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.411566 4888 generic.go:334] "Generic (PLEG): container finished" podID="f5b9041d-9cb2-4bd0-a57b-2a884add6fcc" containerID="5c7412460657d618d8de7270fe20adc30ee6a2b188d7e60c0e74dd8f760228d8" exitCode=0 Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.411638 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-52mbw" event={"ID":"f5b9041d-9cb2-4bd0-a57b-2a884add6fcc","Type":"ContainerDied","Data":"5c7412460657d618d8de7270fe20adc30ee6a2b188d7e60c0e74dd8f760228d8"} Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.411688 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-52mbw" event={"ID":"f5b9041d-9cb2-4bd0-a57b-2a884add6fcc","Type":"ContainerStarted","Data":"785e5db0556b6c88b8bec151f4b10e2d3c54e77d887546770ef8ba02856c6c04"} Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.413563 4888 generic.go:334] "Generic (PLEG): container finished" podID="66ea0efb-53b0-4fac-bb45-f01ce9b6430b" containerID="bd3eb933b88d850edff3606c9aa377b0195929dce7ee87dc9993cf7e88d1988d" exitCode=0 Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.413646 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rx7h6" event={"ID":"66ea0efb-53b0-4fac-bb45-f01ce9b6430b","Type":"ContainerDied","Data":"bd3eb933b88d850edff3606c9aa377b0195929dce7ee87dc9993cf7e88d1988d"} Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.413684 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rx7h6" event={"ID":"66ea0efb-53b0-4fac-bb45-f01ce9b6430b","Type":"ContainerStarted","Data":"d4674a1d5ef31ba210cbb728e4a17e2b45ece6ab7a3ca6a9071b58348c0a0509"} Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.725365 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.819490 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdgj7\" (UniqueName: \"kubernetes.io/projected/4aa2563c-6959-448c-9708-99f647cd24e1-kube-api-access-hdgj7\") pod \"4aa2563c-6959-448c-9708-99f647cd24e1\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.819550 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-combined-ca-bundle\") pod \"4aa2563c-6959-448c-9708-99f647cd24e1\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.819573 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-ring-data-devices\") pod \"4aa2563c-6959-448c-9708-99f647cd24e1\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.819612 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4aa2563c-6959-448c-9708-99f647cd24e1-etc-swift\") pod \"4aa2563c-6959-448c-9708-99f647cd24e1\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.819649 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-swiftconf\") pod \"4aa2563c-6959-448c-9708-99f647cd24e1\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.819742 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-dispersionconf\") pod \"4aa2563c-6959-448c-9708-99f647cd24e1\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.819762 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-scripts\") pod \"4aa2563c-6959-448c-9708-99f647cd24e1\" (UID: \"4aa2563c-6959-448c-9708-99f647cd24e1\") " Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.820285 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4aa2563c-6959-448c-9708-99f647cd24e1" (UID: "4aa2563c-6959-448c-9708-99f647cd24e1"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.821174 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aa2563c-6959-448c-9708-99f647cd24e1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4aa2563c-6959-448c-9708-99f647cd24e1" (UID: "4aa2563c-6959-448c-9708-99f647cd24e1"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.826744 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa2563c-6959-448c-9708-99f647cd24e1-kube-api-access-hdgj7" (OuterVolumeSpecName: "kube-api-access-hdgj7") pod "4aa2563c-6959-448c-9708-99f647cd24e1" (UID: "4aa2563c-6959-448c-9708-99f647cd24e1"). InnerVolumeSpecName "kube-api-access-hdgj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.832557 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4aa2563c-6959-448c-9708-99f647cd24e1" (UID: "4aa2563c-6959-448c-9708-99f647cd24e1"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.843404 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-scripts" (OuterVolumeSpecName: "scripts") pod "4aa2563c-6959-448c-9708-99f647cd24e1" (UID: "4aa2563c-6959-448c-9708-99f647cd24e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.844971 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4aa2563c-6959-448c-9708-99f647cd24e1" (UID: "4aa2563c-6959-448c-9708-99f647cd24e1"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.848447 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4aa2563c-6959-448c-9708-99f647cd24e1" (UID: "4aa2563c-6959-448c-9708-99f647cd24e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.923978 4888 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.924013 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.924026 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdgj7\" (UniqueName: \"kubernetes.io/projected/4aa2563c-6959-448c-9708-99f647cd24e1-kube-api-access-hdgj7\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.924038 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.924050 4888 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4aa2563c-6959-448c-9708-99f647cd24e1-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.924061 4888 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4aa2563c-6959-448c-9708-99f647cd24e1-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:50 crc kubenswrapper[4888]: I1006 15:16:50.924073 4888 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4aa2563c-6959-448c-9708-99f647cd24e1-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.422572 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qcnvj" event={"ID":"4aa2563c-6959-448c-9708-99f647cd24e1","Type":"ContainerDied","Data":"0192f72200f7bc9a66e136283bbc3180ab9ae7926d98013ba9d094f3e0d271f7"} Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.422650 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0192f72200f7bc9a66e136283bbc3180ab9ae7926d98013ba9d094f3e0d271f7" Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.422770 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qcnvj" Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.867790 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-52mbw" Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.875812 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rx7h6" Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.885656 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5hmct" Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.940431 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck98f\" (UniqueName: \"kubernetes.io/projected/66ea0efb-53b0-4fac-bb45-f01ce9b6430b-kube-api-access-ck98f\") pod \"66ea0efb-53b0-4fac-bb45-f01ce9b6430b\" (UID: \"66ea0efb-53b0-4fac-bb45-f01ce9b6430b\") " Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.940527 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzsbc\" (UniqueName: \"kubernetes.io/projected/f5b9041d-9cb2-4bd0-a57b-2a884add6fcc-kube-api-access-gzsbc\") pod \"f5b9041d-9cb2-4bd0-a57b-2a884add6fcc\" (UID: \"f5b9041d-9cb2-4bd0-a57b-2a884add6fcc\") " Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.940719 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbwhf\" (UniqueName: \"kubernetes.io/projected/bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9-kube-api-access-nbwhf\") pod \"bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9\" (UID: \"bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9\") " Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.946130 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ea0efb-53b0-4fac-bb45-f01ce9b6430b-kube-api-access-ck98f" (OuterVolumeSpecName: "kube-api-access-ck98f") pod "66ea0efb-53b0-4fac-bb45-f01ce9b6430b" (UID: "66ea0efb-53b0-4fac-bb45-f01ce9b6430b"). InnerVolumeSpecName "kube-api-access-ck98f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.946190 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b9041d-9cb2-4bd0-a57b-2a884add6fcc-kube-api-access-gzsbc" (OuterVolumeSpecName: "kube-api-access-gzsbc") pod "f5b9041d-9cb2-4bd0-a57b-2a884add6fcc" (UID: "f5b9041d-9cb2-4bd0-a57b-2a884add6fcc"). InnerVolumeSpecName "kube-api-access-gzsbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:16:51 crc kubenswrapper[4888]: I1006 15:16:51.946589 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9-kube-api-access-nbwhf" (OuterVolumeSpecName: "kube-api-access-nbwhf") pod "bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9" (UID: "bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9"). InnerVolumeSpecName "kube-api-access-nbwhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.043096 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbwhf\" (UniqueName: \"kubernetes.io/projected/bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9-kube-api-access-nbwhf\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.043328 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck98f\" (UniqueName: \"kubernetes.io/projected/66ea0efb-53b0-4fac-bb45-f01ce9b6430b-kube-api-access-ck98f\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.043426 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzsbc\" (UniqueName: \"kubernetes.io/projected/f5b9041d-9cb2-4bd0-a57b-2a884add6fcc-kube-api-access-gzsbc\") on node \"crc\" DevicePath \"\"" Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.429144 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-52mbw" event={"ID":"f5b9041d-9cb2-4bd0-a57b-2a884add6fcc","Type":"ContainerDied","Data":"785e5db0556b6c88b8bec151f4b10e2d3c54e77d887546770ef8ba02856c6c04"} Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.429692 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="785e5db0556b6c88b8bec151f4b10e2d3c54e77d887546770ef8ba02856c6c04" Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.429838 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-52mbw" Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.433687 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rx7h6" event={"ID":"66ea0efb-53b0-4fac-bb45-f01ce9b6430b","Type":"ContainerDied","Data":"d4674a1d5ef31ba210cbb728e4a17e2b45ece6ab7a3ca6a9071b58348c0a0509"} Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.433712 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4674a1d5ef31ba210cbb728e4a17e2b45ece6ab7a3ca6a9071b58348c0a0509" Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.433724 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rx7h6" Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.435230 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5hmct" event={"ID":"bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9","Type":"ContainerDied","Data":"b049044b358809a98753016303c996b903f541f2e8cd299fc58bc739475bd698"} Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.435292 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b049044b358809a98753016303c996b903f541f2e8cd299fc58bc739475bd698" Oct 06 15:16:52 crc kubenswrapper[4888]: I1006 15:16:52.435371 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5hmct" Oct 06 15:16:57 crc kubenswrapper[4888]: I1006 15:16:57.960915 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nzvpg" podUID="82705879-10de-4927-946c-c55766069d1b" containerName="ovn-controller" probeResult="failure" output=< Oct 06 15:16:57 crc kubenswrapper[4888]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 06 15:16:57 crc kubenswrapper[4888]: > Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.025350 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.027896 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mczwz" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.255656 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nzvpg-config-bncv2"] Oct 06 15:16:58 crc kubenswrapper[4888]: E1006 15:16:58.256117 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ea0efb-53b0-4fac-bb45-f01ce9b6430b" containerName="mariadb-database-create" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.256141 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ea0efb-53b0-4fac-bb45-f01ce9b6430b" containerName="mariadb-database-create" Oct 06 15:16:58 crc kubenswrapper[4888]: E1006 15:16:58.256166 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9" containerName="mariadb-database-create" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.256174 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9" containerName="mariadb-database-create" Oct 06 15:16:58 crc kubenswrapper[4888]: E1006 15:16:58.256198 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b9041d-9cb2-4bd0-a57b-2a884add6fcc" containerName="mariadb-database-create" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.256207 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b9041d-9cb2-4bd0-a57b-2a884add6fcc" containerName="mariadb-database-create" Oct 06 15:16:58 crc kubenswrapper[4888]: E1006 15:16:58.256224 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa2563c-6959-448c-9708-99f647cd24e1" containerName="swift-ring-rebalance" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.256233 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa2563c-6959-448c-9708-99f647cd24e1" containerName="swift-ring-rebalance" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.256457 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa2563c-6959-448c-9708-99f647cd24e1" containerName="swift-ring-rebalance" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.256474 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b9041d-9cb2-4bd0-a57b-2a884add6fcc" containerName="mariadb-database-create" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.256493 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9" containerName="mariadb-database-create" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.256504 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ea0efb-53b0-4fac-bb45-f01ce9b6430b" containerName="mariadb-database-create" Oct 06 15:16:58 crc kubenswrapper[4888]: 
I1006 15:16:58.257140 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.259848 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.273916 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nzvpg-config-bncv2"] Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.343326 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-additional-scripts\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.343419 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.343453 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-log-ovn\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.343489 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-scripts\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.343549 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run-ovn\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.343600 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgdbs\" (UniqueName: \"kubernetes.io/projected/7c542eef-229e-4e21-a37a-91d75ff12783-kube-api-access-kgdbs\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.363973 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-89a8-account-create-2p29h"] Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.365243 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-89a8-account-create-2p29h" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.367048 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.375705 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-89a8-account-create-2p29h"] Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.444877 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.444942 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-log-ovn\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.444995 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-scripts\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.445055 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47x5t\" (UniqueName: \"kubernetes.io/projected/023135e5-d2fb-4bd2-a8b7-03b214c7c81f-kube-api-access-47x5t\") pod \"keystone-89a8-account-create-2p29h\" (UID: \"023135e5-d2fb-4bd2-a8b7-03b214c7c81f\") " pod="openstack/keystone-89a8-account-create-2p29h" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.445085 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run-ovn\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.445126 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgdbs\" (UniqueName: \"kubernetes.io/projected/7c542eef-229e-4e21-a37a-91d75ff12783-kube-api-access-kgdbs\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.445201 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-additional-scripts\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.446268 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-additional-scripts\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: 
\"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.447593 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run-ovn\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.447806 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-scripts\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.447927 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-log-ovn\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.447935 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.472907 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgdbs\" (UniqueName: \"kubernetes.io/projected/7c542eef-229e-4e21-a37a-91d75ff12783-kube-api-access-kgdbs\") pod \"ovn-controller-nzvpg-config-bncv2\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.546122 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47x5t\" (UniqueName: \"kubernetes.io/projected/023135e5-d2fb-4bd2-a8b7-03b214c7c81f-kube-api-access-47x5t\") pod \"keystone-89a8-account-create-2p29h\" (UID: \"023135e5-d2fb-4bd2-a8b7-03b214c7c81f\") " pod="openstack/keystone-89a8-account-create-2p29h" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.568667 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47x5t\" (UniqueName: \"kubernetes.io/projected/023135e5-d2fb-4bd2-a8b7-03b214c7c81f-kube-api-access-47x5t\") pod \"keystone-89a8-account-create-2p29h\" (UID: \"023135e5-d2fb-4bd2-a8b7-03b214c7c81f\") " pod="openstack/keystone-89a8-account-create-2p29h" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.579192 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.620015 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7b73-account-create-wmpks"] Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.621344 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7b73-account-create-wmpks" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.625297 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.652284 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vlhh\" (UniqueName: \"kubernetes.io/projected/bb82a472-d981-4b66-8101-ebf6ca21b88b-kube-api-access-8vlhh\") pod \"placement-7b73-account-create-wmpks\" (UID: \"bb82a472-d981-4b66-8101-ebf6ca21b88b\") " pod="openstack/placement-7b73-account-create-wmpks" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.656669 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7b73-account-create-wmpks"] Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.691531 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-89a8-account-create-2p29h" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.754889 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vlhh\" (UniqueName: \"kubernetes.io/projected/bb82a472-d981-4b66-8101-ebf6ca21b88b-kube-api-access-8vlhh\") pod \"placement-7b73-account-create-wmpks\" (UID: \"bb82a472-d981-4b66-8101-ebf6ca21b88b\") " pod="openstack/placement-7b73-account-create-wmpks" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.777365 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vlhh\" (UniqueName: \"kubernetes.io/projected/bb82a472-d981-4b66-8101-ebf6ca21b88b-kube-api-access-8vlhh\") pod \"placement-7b73-account-create-wmpks\" (UID: \"bb82a472-d981-4b66-8101-ebf6ca21b88b\") " pod="openstack/placement-7b73-account-create-wmpks" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.789938 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7b73-account-create-wmpks" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.885834 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9a2f-account-create-mcsms"] Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.891231 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9a2f-account-create-mcsms" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.897554 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.911598 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9a2f-account-create-mcsms"] Oct 06 15:16:58 crc kubenswrapper[4888]: I1006 15:16:58.963792 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdwk\" (UniqueName: \"kubernetes.io/projected/f3443d75-b692-4b7f-86fb-3293f1b817c8-kube-api-access-gxdwk\") pod \"glance-9a2f-account-create-mcsms\" (UID: \"f3443d75-b692-4b7f-86fb-3293f1b817c8\") " pod="openstack/glance-9a2f-account-create-mcsms" Oct 06 15:16:59 crc kubenswrapper[4888]: I1006 15:16:59.067420 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxdwk\" (UniqueName: \"kubernetes.io/projected/f3443d75-b692-4b7f-86fb-3293f1b817c8-kube-api-access-gxdwk\") pod \"glance-9a2f-account-create-mcsms\" (UID: \"f3443d75-b692-4b7f-86fb-3293f1b817c8\") " pod="openstack/glance-9a2f-account-create-mcsms" Oct 06 15:16:59 crc kubenswrapper[4888]: I1006 15:16:59.088565 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxdwk\" (UniqueName: \"kubernetes.io/projected/f3443d75-b692-4b7f-86fb-3293f1b817c8-kube-api-access-gxdwk\") pod \"glance-9a2f-account-create-mcsms\" (UID: \"f3443d75-b692-4b7f-86fb-3293f1b817c8\") " pod="openstack/glance-9a2f-account-create-mcsms" Oct 06 15:16:59 crc kubenswrapper[4888]: I1006 15:16:59.264124 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9a2f-account-create-mcsms" Oct 06 15:16:59 crc kubenswrapper[4888]: I1006 15:16:59.287366 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nzvpg-config-bncv2"] Oct 06 15:16:59 crc kubenswrapper[4888]: I1006 15:16:59.403781 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-89a8-account-create-2p29h"] Oct 06 15:16:59 crc kubenswrapper[4888]: W1006 15:16:59.414071 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod023135e5_d2fb_4bd2_a8b7_03b214c7c81f.slice/crio-73ad28e1c7f7d30cfd58e2f9f081982dbe7b85f440c0c3fe7a53a3a1a6702ccb WatchSource:0}: Error finding container 73ad28e1c7f7d30cfd58e2f9f081982dbe7b85f440c0c3fe7a53a3a1a6702ccb: Status 404 returned error can't find the container with id 73ad28e1c7f7d30cfd58e2f9f081982dbe7b85f440c0c3fe7a53a3a1a6702ccb Oct 06 15:16:59 crc kubenswrapper[4888]: I1006 15:16:59.454563 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7b73-account-create-wmpks"] Oct 06 15:16:59 crc kubenswrapper[4888]: W1006 15:16:59.460190 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb82a472_d981_4b66_8101_ebf6ca21b88b.slice/crio-d88eef9b75eb2c7b12ffb14bee0913624e739e377a6594d3940a6718646ecab8 WatchSource:0}: Error finding container d88eef9b75eb2c7b12ffb14bee0913624e739e377a6594d3940a6718646ecab8: Status 404 returned error can't find the container with id d88eef9b75eb2c7b12ffb14bee0913624e739e377a6594d3940a6718646ecab8 Oct 06 15:16:59 crc kubenswrapper[4888]: I1006 15:16:59.496732 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-7b73-account-create-wmpks" event={"ID":"bb82a472-d981-4b66-8101-ebf6ca21b88b","Type":"ContainerStarted","Data":"d88eef9b75eb2c7b12ffb14bee0913624e739e377a6594d3940a6718646ecab8"} Oct 06 15:16:59 crc kubenswrapper[4888]: I1006 15:16:59.498395 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-89a8-account-create-2p29h" event={"ID":"023135e5-d2fb-4bd2-a8b7-03b214c7c81f","Type":"ContainerStarted","Data":"73ad28e1c7f7d30cfd58e2f9f081982dbe7b85f440c0c3fe7a53a3a1a6702ccb"} Oct 06 15:16:59 crc kubenswrapper[4888]: I1006 15:16:59.499228 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nzvpg-config-bncv2" event={"ID":"7c542eef-229e-4e21-a37a-91d75ff12783","Type":"ContainerStarted","Data":"244340d02144c54831cd25e119a108324b135f3ed7a9864388674118d0aec7e1"} Oct 06 15:16:59 crc kubenswrapper[4888]: I1006 15:16:59.792400 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9a2f-account-create-mcsms"] Oct 06 15:16:59 crc kubenswrapper[4888]: W1006 15:16:59.801893 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3443d75_b692_4b7f_86fb_3293f1b817c8.slice/crio-be8bf770b7b6074b8a876f08adaf736e227346e2b11c00153ddd90abcf8742e8 WatchSource:0}: Error finding container be8bf770b7b6074b8a876f08adaf736e227346e2b11c00153ddd90abcf8742e8: Status 404 returned error can't find the container with id be8bf770b7b6074b8a876f08adaf736e227346e2b11c00153ddd90abcf8742e8 Oct 06 15:17:00 crc kubenswrapper[4888]: I1006 15:17:00.521130 4888 generic.go:334] "Generic (PLEG): container finished" podID="023135e5-d2fb-4bd2-a8b7-03b214c7c81f" containerID="260f21e7cf7c4201a96ce0413be2f1548ae7f4f1b89855620c48739cb78b2867" exitCode=0 Oct 06 15:17:00 crc kubenswrapper[4888]: I1006 15:17:00.521539 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-89a8-account-create-2p29h" event={"ID":"023135e5-d2fb-4bd2-a8b7-03b214c7c81f","Type":"ContainerDied","Data":"260f21e7cf7c4201a96ce0413be2f1548ae7f4f1b89855620c48739cb78b2867"} Oct 06 15:17:00 crc kubenswrapper[4888]: I1006 15:17:00.525044 4888 generic.go:334] "Generic (PLEG): container finished" podID="f3443d75-b692-4b7f-86fb-3293f1b817c8" containerID="b14e679cc89b6b8627f7fbc23317dc24ca2d38472f8c83484cb6c0da73708589" exitCode=0 Oct 06 15:17:00 crc kubenswrapper[4888]: I1006 15:17:00.525342 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9a2f-account-create-mcsms" event={"ID":"f3443d75-b692-4b7f-86fb-3293f1b817c8","Type":"ContainerDied","Data":"b14e679cc89b6b8627f7fbc23317dc24ca2d38472f8c83484cb6c0da73708589"} Oct 06 15:17:00 crc kubenswrapper[4888]: I1006 15:17:00.525381 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9a2f-account-create-mcsms" event={"ID":"f3443d75-b692-4b7f-86fb-3293f1b817c8","Type":"ContainerStarted","Data":"be8bf770b7b6074b8a876f08adaf736e227346e2b11c00153ddd90abcf8742e8"} Oct 06 15:17:00 crc kubenswrapper[4888]: I1006 15:17:00.526853 4888 generic.go:334] "Generic (PLEG): container finished" podID="7c542eef-229e-4e21-a37a-91d75ff12783" containerID="eaf1a839f33ce516252f305ec59d0e594303dc44149838e76a5ea3142ed60c6a" exitCode=0 Oct 06 15:17:00 crc kubenswrapper[4888]: I1006 15:17:00.526980 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nzvpg-config-bncv2" 
event={"ID":"7c542eef-229e-4e21-a37a-91d75ff12783","Type":"ContainerDied","Data":"eaf1a839f33ce516252f305ec59d0e594303dc44149838e76a5ea3142ed60c6a"} Oct 06 15:17:00 crc kubenswrapper[4888]: I1006 15:17:00.528380 4888 generic.go:334] "Generic (PLEG): container finished" podID="bb82a472-d981-4b66-8101-ebf6ca21b88b" containerID="56b5ec5518b02ae97ce4d12bcffa6537c797d717d17556d70138bf55a39bd1dc" exitCode=0 Oct 06 15:17:00 crc kubenswrapper[4888]: I1006 15:17:00.528414 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b73-account-create-wmpks" event={"ID":"bb82a472-d981-4b66-8101-ebf6ca21b88b","Type":"ContainerDied","Data":"56b5ec5518b02ae97ce4d12bcffa6537c797d717d17556d70138bf55a39bd1dc"} Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.877195 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.917004 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run\") pod \"7c542eef-229e-4e21-a37a-91d75ff12783\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.917109 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-additional-scripts\") pod \"7c542eef-229e-4e21-a37a-91d75ff12783\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.917175 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-log-ovn\") pod \"7c542eef-229e-4e21-a37a-91d75ff12783\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.917274 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-scripts\") pod \"7c542eef-229e-4e21-a37a-91d75ff12783\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.917326 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run-ovn\") pod \"7c542eef-229e-4e21-a37a-91d75ff12783\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.917351 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgdbs\" (UniqueName: \"kubernetes.io/projected/7c542eef-229e-4e21-a37a-91d75ff12783-kube-api-access-kgdbs\") pod \"7c542eef-229e-4e21-a37a-91d75ff12783\" (UID: \"7c542eef-229e-4e21-a37a-91d75ff12783\") " Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.917471 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run" (OuterVolumeSpecName: "var-run") pod "7c542eef-229e-4e21-a37a-91d75ff12783" (UID: "7c542eef-229e-4e21-a37a-91d75ff12783"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.917837 4888 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.918370 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7c542eef-229e-4e21-a37a-91d75ff12783" (UID: "7c542eef-229e-4e21-a37a-91d75ff12783"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.919048 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7c542eef-229e-4e21-a37a-91d75ff12783" (UID: "7c542eef-229e-4e21-a37a-91d75ff12783"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.919100 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7c542eef-229e-4e21-a37a-91d75ff12783" (UID: "7c542eef-229e-4e21-a37a-91d75ff12783"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.919153 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-scripts" (OuterVolumeSpecName: "scripts") pod "7c542eef-229e-4e21-a37a-91d75ff12783" (UID: "7c542eef-229e-4e21-a37a-91d75ff12783"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:01 crc kubenswrapper[4888]: I1006 15:17:01.934216 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c542eef-229e-4e21-a37a-91d75ff12783-kube-api-access-kgdbs" (OuterVolumeSpecName: "kube-api-access-kgdbs") pod "7c542eef-229e-4e21-a37a-91d75ff12783" (UID: "7c542eef-229e-4e21-a37a-91d75ff12783"). InnerVolumeSpecName "kube-api-access-kgdbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.019445 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.019540 4888 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.019553 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgdbs\" (UniqueName: \"kubernetes.io/projected/7c542eef-229e-4e21-a37a-91d75ff12783-kube-api-access-kgdbs\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.019565 4888 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c542eef-229e-4e21-a37a-91d75ff12783-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.019576 4888 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c542eef-229e-4e21-a37a-91d75ff12783-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.038846 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9a2f-account-create-mcsms" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.045888 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7b73-account-create-wmpks" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.064960 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-89a8-account-create-2p29h" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.121638 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vlhh\" (UniqueName: \"kubernetes.io/projected/bb82a472-d981-4b66-8101-ebf6ca21b88b-kube-api-access-8vlhh\") pod \"bb82a472-d981-4b66-8101-ebf6ca21b88b\" (UID: \"bb82a472-d981-4b66-8101-ebf6ca21b88b\") " Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.127342 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47x5t\" (UniqueName: \"kubernetes.io/projected/023135e5-d2fb-4bd2-a8b7-03b214c7c81f-kube-api-access-47x5t\") pod \"023135e5-d2fb-4bd2-a8b7-03b214c7c81f\" (UID: \"023135e5-d2fb-4bd2-a8b7-03b214c7c81f\") " Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.127447 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxdwk\" (UniqueName: \"kubernetes.io/projected/f3443d75-b692-4b7f-86fb-3293f1b817c8-kube-api-access-gxdwk\") pod \"f3443d75-b692-4b7f-86fb-3293f1b817c8\" (UID: \"f3443d75-b692-4b7f-86fb-3293f1b817c8\") " Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.129851 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb82a472-d981-4b66-8101-ebf6ca21b88b-kube-api-access-8vlhh" (OuterVolumeSpecName: "kube-api-access-8vlhh") pod "bb82a472-d981-4b66-8101-ebf6ca21b88b" (UID: "bb82a472-d981-4b66-8101-ebf6ca21b88b"). InnerVolumeSpecName "kube-api-access-8vlhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.131665 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3443d75-b692-4b7f-86fb-3293f1b817c8-kube-api-access-gxdwk" (OuterVolumeSpecName: "kube-api-access-gxdwk") pod "f3443d75-b692-4b7f-86fb-3293f1b817c8" (UID: "f3443d75-b692-4b7f-86fb-3293f1b817c8"). InnerVolumeSpecName "kube-api-access-gxdwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.139950 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/023135e5-d2fb-4bd2-a8b7-03b214c7c81f-kube-api-access-47x5t" (OuterVolumeSpecName: "kube-api-access-47x5t") pod "023135e5-d2fb-4bd2-a8b7-03b214c7c81f" (UID: "023135e5-d2fb-4bd2-a8b7-03b214c7c81f"). InnerVolumeSpecName "kube-api-access-47x5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.230088 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vlhh\" (UniqueName: \"kubernetes.io/projected/bb82a472-d981-4b66-8101-ebf6ca21b88b-kube-api-access-8vlhh\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.230126 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47x5t\" (UniqueName: \"kubernetes.io/projected/023135e5-d2fb-4bd2-a8b7-03b214c7c81f-kube-api-access-47x5t\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.230141 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxdwk\" (UniqueName: \"kubernetes.io/projected/f3443d75-b692-4b7f-86fb-3293f1b817c8-kube-api-access-gxdwk\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.543550 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9a2f-account-create-mcsms" event={"ID":"f3443d75-b692-4b7f-86fb-3293f1b817c8","Type":"ContainerDied","Data":"be8bf770b7b6074b8a876f08adaf736e227346e2b11c00153ddd90abcf8742e8"} Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.543594 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be8bf770b7b6074b8a876f08adaf736e227346e2b11c00153ddd90abcf8742e8" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.543625 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9a2f-account-create-mcsms" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.545042 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nzvpg-config-bncv2" event={"ID":"7c542eef-229e-4e21-a37a-91d75ff12783","Type":"ContainerDied","Data":"244340d02144c54831cd25e119a108324b135f3ed7a9864388674118d0aec7e1"} Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.545082 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="244340d02144c54831cd25e119a108324b135f3ed7a9864388674118d0aec7e1" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.545193 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nzvpg-config-bncv2" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.546250 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7b73-account-create-wmpks" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.546344 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7b73-account-create-wmpks" event={"ID":"bb82a472-d981-4b66-8101-ebf6ca21b88b","Type":"ContainerDied","Data":"d88eef9b75eb2c7b12ffb14bee0913624e739e377a6594d3940a6718646ecab8"} Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.546468 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d88eef9b75eb2c7b12ffb14bee0913624e739e377a6594d3940a6718646ecab8" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.547422 4888 generic.go:334] "Generic (PLEG): container finished" podID="91ed3909-71e7-40e7-9e97-e9917d621080" containerID="b5a72bf36651f8d766603c0c96d06ec8573e06712ce58396c5afbfef3f771a97" exitCode=0 Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.547533 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ed3909-71e7-40e7-9e97-e9917d621080","Type":"ContainerDied","Data":"b5a72bf36651f8d766603c0c96d06ec8573e06712ce58396c5afbfef3f771a97"} Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.548353 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-89a8-account-create-2p29h" event={"ID":"023135e5-d2fb-4bd2-a8b7-03b214c7c81f","Type":"ContainerDied","Data":"73ad28e1c7f7d30cfd58e2f9f081982dbe7b85f440c0c3fe7a53a3a1a6702ccb"} Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.549461 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73ad28e1c7f7d30cfd58e2f9f081982dbe7b85f440c0c3fe7a53a3a1a6702ccb" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.548483 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-89a8-account-create-2p29h" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.971862 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nzvpg" Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.984076 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nzvpg-config-bncv2"] Oct 06 15:17:02 crc kubenswrapper[4888]: I1006 15:17:02.990134 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nzvpg-config-bncv2"] Oct 06 15:17:03 crc kubenswrapper[4888]: I1006 15:17:03.557762 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ed3909-71e7-40e7-9e97-e9917d621080","Type":"ContainerStarted","Data":"005d67fa0a200c095eb5b5921d7bcd67f0554d73647cd131d09c8afe13332788"} Oct 06 15:17:03 crc kubenswrapper[4888]: I1006 15:17:03.558006 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 06 15:17:03 crc kubenswrapper[4888]: I1006 15:17:03.582595 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371957.2722 podStartE2EDuration="1m19.58257662s" podCreationTimestamp="2025-10-06 15:15:44 +0000 UTC" firstStartedPulling="2025-10-06 15:15:46.230223011 +0000 UTC m=+886.042573729" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:17:03.579956057 +0000 UTC m=+963.392306765" watchObservedRunningTime="2025-10-06 15:17:03.58257662 +0000 UTC m=+963.394927348" Oct 06 15:17:03 crc kubenswrapper[4888]: I1006 15:17:03.859572 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:17:03 crc kubenswrapper[4888]: I1006 15:17:03.867166 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/277682ba-0d72-43d5-b52c-59f6b02b2963-etc-swift\") pod \"swift-storage-0\" (UID: \"277682ba-0d72-43d5-b52c-59f6b02b2963\") " pod="openstack/swift-storage-0" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.038103 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.152531 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-28k8r"] Oct 06 15:17:04 crc kubenswrapper[4888]: E1006 15:17:04.153423 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023135e5-d2fb-4bd2-a8b7-03b214c7c81f" containerName="mariadb-account-create" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.153532 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="023135e5-d2fb-4bd2-a8b7-03b214c7c81f" containerName="mariadb-account-create" Oct 06 15:17:04 crc kubenswrapper[4888]: E1006 15:17:04.153628 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb82a472-d981-4b66-8101-ebf6ca21b88b" containerName="mariadb-account-create" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.153699 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb82a472-d981-4b66-8101-ebf6ca21b88b" containerName="mariadb-account-create" Oct 06 15:17:04 crc kubenswrapper[4888]: E1006 15:17:04.153820 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3443d75-b692-4b7f-86fb-3293f1b817c8" containerName="mariadb-account-create" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.153898 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3443d75-b692-4b7f-86fb-3293f1b817c8" containerName="mariadb-account-create" Oct 06 15:17:04 crc kubenswrapper[4888]: E1006 15:17:04.153966 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c542eef-229e-4e21-a37a-91d75ff12783" containerName="ovn-config" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.154036 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c542eef-229e-4e21-a37a-91d75ff12783" containerName="ovn-config" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.154306 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c542eef-229e-4e21-a37a-91d75ff12783" containerName="ovn-config" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.154394 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="023135e5-d2fb-4bd2-a8b7-03b214c7c81f" containerName="mariadb-account-create" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.154464 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb82a472-d981-4b66-8101-ebf6ca21b88b" containerName="mariadb-account-create" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.154531 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3443d75-b692-4b7f-86fb-3293f1b817c8" containerName="mariadb-account-create" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.191741 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.196633 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.196840 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rvq4c" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.197333 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-28k8r"] Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.270217 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-db-sync-config-data\") pod \"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.270272 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmppc\" (UniqueName: \"kubernetes.io/projected/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-kube-api-access-qmppc\") pod \"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.270313 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-config-data\") pod \"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.270393 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-combined-ca-bundle\") pod \"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.372292 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-config-data\") pod \"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.372400 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-combined-ca-bundle\") pod \"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.373432 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-db-sync-config-data\") pod \"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.373457 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmppc\" (UniqueName: \"kubernetes.io/projected/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-kube-api-access-qmppc\") pod 
\"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.378659 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-config-data\") pod \"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.379107 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-db-sync-config-data\") pod \"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.390944 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-combined-ca-bundle\") pod \"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.393861 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmppc\" (UniqueName: \"kubernetes.io/projected/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-kube-api-access-qmppc\") pod \"glance-db-sync-28k8r\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.518158 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.734000 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 06 15:17:04 crc kubenswrapper[4888]: W1006 15:17:04.738125 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod277682ba_0d72_43d5_b52c_59f6b02b2963.slice/crio-28834e9dc318bbd4e5b00999fe2b518a8fa2cdbc7056fae25a40b391b048be3b WatchSource:0}: Error finding container 28834e9dc318bbd4e5b00999fe2b518a8fa2cdbc7056fae25a40b391b048be3b: Status 404 returned error can't find the container with id 28834e9dc318bbd4e5b00999fe2b518a8fa2cdbc7056fae25a40b391b048be3b Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.896254 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-28k8r"] Oct 06 15:17:04 crc kubenswrapper[4888]: W1006 15:17:04.898055 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38b6bbbb_cfd0_4a91_a830_1f1572e4f519.slice/crio-18e2647d2ea3a3424e5c386d6bfae2c8a6d0d36ca0c13136d33bcea032a58471 WatchSource:0}: Error finding container 18e2647d2ea3a3424e5c386d6bfae2c8a6d0d36ca0c13136d33bcea032a58471: Status 404 returned error can't find the container with id 18e2647d2ea3a3424e5c386d6bfae2c8a6d0d36ca0c13136d33bcea032a58471 Oct 06 15:17:04 crc kubenswrapper[4888]: I1006 15:17:04.932534 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c542eef-229e-4e21-a37a-91d75ff12783" path="/var/lib/kubelet/pods/7c542eef-229e-4e21-a37a-91d75ff12783/volumes" Oct 06 15:17:05 crc kubenswrapper[4888]: I1006 15:17:05.261025 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:17:05 crc kubenswrapper[4888]: I1006 15:17:05.574619 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"28834e9dc318bbd4e5b00999fe2b518a8fa2cdbc7056fae25a40b391b048be3b"} Oct 06 15:17:05 crc kubenswrapper[4888]: I1006 15:17:05.575882 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-28k8r" event={"ID":"38b6bbbb-cfd0-4a91-a830-1f1572e4f519","Type":"ContainerStarted","Data":"18e2647d2ea3a3424e5c386d6bfae2c8a6d0d36ca0c13136d33bcea032a58471"} Oct 06 15:17:08 crc kubenswrapper[4888]: I1006 15:17:08.611126 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"b7fe6921315d692e8e6a25917c2a6c2b833f31272b6ebe4e9f7e1e973333f2aa"} Oct 06 15:17:08 crc kubenswrapper[4888]: I1006 15:17:08.611617 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"4f705194a74af4938c16721ee1940297d5abf6aeddbc5cebc1459f70bb6d7d14"} Oct 06 15:17:09 crc kubenswrapper[4888]: I1006 15:17:09.622864 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"3238781458c647af7087d4e3771451723009c5865bd8a071bb4fc248471393df"} Oct 06 15:17:09 crc kubenswrapper[4888]: I1006 15:17:09.623549 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"7fba18b6150fe97d6de34e73f4739693baad47220ecc440dad793ed1ca89fe89"} Oct 06 15:17:15 crc kubenswrapper[4888]: I1006 15:17:15.610160 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.110488 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-nk6fs"] Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.111480 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-nk6fs" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.183913 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nk6fs"] Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.220468 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-rq7rs"] Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.221492 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rq7rs" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.236776 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rq7rs"] Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.286656 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7hx5\" (UniqueName: \"kubernetes.io/projected/666fdea5-da00-46e5-b4af-4b82ca36e313-kube-api-access-v7hx5\") pod \"barbican-db-create-nk6fs\" (UID: \"666fdea5-da00-46e5-b4af-4b82ca36e313\") " pod="openstack/barbican-db-create-nk6fs" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.323355 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5lfmr"] Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.324413 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5lfmr" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.343361 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5lfmr"] Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.388854 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6ljw\" (UniqueName: \"kubernetes.io/projected/89d5c344-92d9-4ade-81d3-49f4c2065516-kube-api-access-q6ljw\") pod \"cinder-db-create-rq7rs\" (UID: \"89d5c344-92d9-4ade-81d3-49f4c2065516\") " pod="openstack/cinder-db-create-rq7rs" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.388950 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7hx5\" (UniqueName: \"kubernetes.io/projected/666fdea5-da00-46e5-b4af-4b82ca36e313-kube-api-access-v7hx5\") pod \"barbican-db-create-nk6fs\" (UID: \"666fdea5-da00-46e5-b4af-4b82ca36e313\") " pod="openstack/barbican-db-create-nk6fs" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.396873 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-745cp"] Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.397909 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.403400 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pj4xj" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.405068 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.405235 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.407116 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.423965 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7hx5\" (UniqueName: \"kubernetes.io/projected/666fdea5-da00-46e5-b4af-4b82ca36e313-kube-api-access-v7hx5\") pod \"barbican-db-create-nk6fs\" (UID: \"666fdea5-da00-46e5-b4af-4b82ca36e313\") " pod="openstack/barbican-db-create-nk6fs" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.432855 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nk6fs" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.472399 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-745cp"] Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.490288 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjvxc\" (UniqueName: \"kubernetes.io/projected/553a9a70-1882-4a91-b6a3-383e723f893a-kube-api-access-gjvxc\") pod \"neutron-db-create-5lfmr\" (UID: \"553a9a70-1882-4a91-b6a3-383e723f893a\") " pod="openstack/neutron-db-create-5lfmr" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.490337 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6ljw\" (UniqueName: \"kubernetes.io/projected/89d5c344-92d9-4ade-81d3-49f4c2065516-kube-api-access-q6ljw\") pod \"cinder-db-create-rq7rs\" (UID: \"89d5c344-92d9-4ade-81d3-49f4c2065516\") " pod="openstack/cinder-db-create-rq7rs" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.540696 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6ljw\" (UniqueName: \"kubernetes.io/projected/89d5c344-92d9-4ade-81d3-49f4c2065516-kube-api-access-q6ljw\") pod \"cinder-db-create-rq7rs\" (UID: \"89d5c344-92d9-4ade-81d3-49f4c2065516\") " pod="openstack/cinder-db-create-rq7rs" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.543383 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rq7rs" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.591860 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjvxc\" (UniqueName: \"kubernetes.io/projected/553a9a70-1882-4a91-b6a3-383e723f893a-kube-api-access-gjvxc\") pod \"neutron-db-create-5lfmr\" (UID: \"553a9a70-1882-4a91-b6a3-383e723f893a\") " pod="openstack/neutron-db-create-5lfmr" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.592276 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-config-data\") pod \"keystone-db-sync-745cp\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.592372 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tclgd\" (UniqueName: \"kubernetes.io/projected/8e8f800f-89a2-4586-9298-81a993e6f60d-kube-api-access-tclgd\") pod \"keystone-db-sync-745cp\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.592425 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-combined-ca-bundle\") pod \"keystone-db-sync-745cp\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.625560 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjvxc\" (UniqueName: \"kubernetes.io/projected/553a9a70-1882-4a91-b6a3-383e723f893a-kube-api-access-gjvxc\") pod \"neutron-db-create-5lfmr\" (UID: \"553a9a70-1882-4a91-b6a3-383e723f893a\") " 
pod="openstack/neutron-db-create-5lfmr" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.648678 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5lfmr" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.694242 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tclgd\" (UniqueName: \"kubernetes.io/projected/8e8f800f-89a2-4586-9298-81a993e6f60d-kube-api-access-tclgd\") pod \"keystone-db-sync-745cp\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.694311 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-combined-ca-bundle\") pod \"keystone-db-sync-745cp\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.694445 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-config-data\") pod \"keystone-db-sync-745cp\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.700761 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-config-data\") pod \"keystone-db-sync-745cp\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.703052 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-combined-ca-bundle\") pod \"keystone-db-sync-745cp\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.713263 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tclgd\" (UniqueName: \"kubernetes.io/projected/8e8f800f-89a2-4586-9298-81a993e6f60d-kube-api-access-tclgd\") pod \"keystone-db-sync-745cp\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:16 crc kubenswrapper[4888]: I1006 15:17:16.718550 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:19 crc kubenswrapper[4888]: I1006 15:17:19.578872 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nk6fs"] Oct 06 15:17:19 crc kubenswrapper[4888]: I1006 15:17:19.589897 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-745cp"] Oct 06 15:17:19 crc kubenswrapper[4888]: I1006 15:17:19.697982 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rq7rs"] Oct 06 15:17:19 crc kubenswrapper[4888]: I1006 15:17:19.704898 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5lfmr"] Oct 06 15:17:19 crc kubenswrapper[4888]: W1006 15:17:19.719698 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod553a9a70_1882_4a91_b6a3_383e723f893a.slice/crio-8ad8c7ccfa589b98de2d00f5f5aa06cacb397ecf5245193b6247b0efe472d68e WatchSource:0}: Error finding container 8ad8c7ccfa589b98de2d00f5f5aa06cacb397ecf5245193b6247b0efe472d68e: Status 404 returned error can't find the container with id 8ad8c7ccfa589b98de2d00f5f5aa06cacb397ecf5245193b6247b0efe472d68e Oct 06 15:17:19 crc kubenswrapper[4888]: W1006 15:17:19.721626 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89d5c344_92d9_4ade_81d3_49f4c2065516.slice/crio-0756820e62fa8f02c943d0558a75711ce9a63f0073f51cea156b713056572a3b WatchSource:0}: Error finding container 0756820e62fa8f02c943d0558a75711ce9a63f0073f51cea156b713056572a3b: Status 404 returned error can't find the container with id 0756820e62fa8f02c943d0558a75711ce9a63f0073f51cea156b713056572a3b Oct 06 15:17:19 crc kubenswrapper[4888]: I1006 15:17:19.729621 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nk6fs" event={"ID":"666fdea5-da00-46e5-b4af-4b82ca36e313","Type":"ContainerStarted","Data":"5f50507de8f3ca91313b690631a41c6c1b65d79ffab82cb3575bcf265df63468"} Oct 06 15:17:19 crc kubenswrapper[4888]: I1006 15:17:19.730489 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-745cp" event={"ID":"8e8f800f-89a2-4586-9298-81a993e6f60d","Type":"ContainerStarted","Data":"ec0888d2748008fa92fcf94876922c0188e277baefec4b72d4d1f2302afacf1a"} Oct 06 15:17:19 crc kubenswrapper[4888]: I1006 15:17:19.742089 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"12ddd93032b115947a15cf91ad9be02cb675ad2f21c31b4804fedc3b7ea04d96"} Oct 06 15:17:19 crc kubenswrapper[4888]: I1006 15:17:19.743358 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"a903597441e409794c5433e08fb0a4b236faee4a4c33b3467f09465c263ccc7d"} Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.761302 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"248c8fdaddfc596f062c4419450772b60194bf909aaeaad4baddc2eaf902a418"} Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.761811 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"f52cc2099cf0e6a23209fd6aba9af54f5b791b15a5afece733ae7e76d3af098c"} Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.763516 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-28k8r" event={"ID":"38b6bbbb-cfd0-4a91-a830-1f1572e4f519","Type":"ContainerStarted","Data":"d40cb091e7ed4c9067d7ea42ab8adeda978339f510268435dda27cc2c8de1d25"} Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.767267 4888 generic.go:334] "Generic (PLEG): container finished" podID="89d5c344-92d9-4ade-81d3-49f4c2065516" containerID="cce703a67ea4d329fe1e949a7c7513b1fc172b6f07b148ae198f821e84730bc7" exitCode=0 Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.767441 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rq7rs" event={"ID":"89d5c344-92d9-4ade-81d3-49f4c2065516","Type":"ContainerDied","Data":"cce703a67ea4d329fe1e949a7c7513b1fc172b6f07b148ae198f821e84730bc7"} Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.767504 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rq7rs" event={"ID":"89d5c344-92d9-4ade-81d3-49f4c2065516","Type":"ContainerStarted","Data":"0756820e62fa8f02c943d0558a75711ce9a63f0073f51cea156b713056572a3b"} Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.768773 4888 generic.go:334] "Generic (PLEG): container finished" podID="666fdea5-da00-46e5-b4af-4b82ca36e313" containerID="1a3810fb19d037c2da4c4ba0d360690222f91f07ad3ac2966c92f7c0c9e668de" exitCode=0 Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.769067 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nk6fs" event={"ID":"666fdea5-da00-46e5-b4af-4b82ca36e313","Type":"ContainerDied","Data":"1a3810fb19d037c2da4c4ba0d360690222f91f07ad3ac2966c92f7c0c9e668de"} Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.780101 4888 generic.go:334] "Generic (PLEG): container finished" podID="553a9a70-1882-4a91-b6a3-383e723f893a" containerID="899272be0e36f8519906fd7bed60c013326da79f4410c7aed52c1eddcdaa5078" exitCode=0 Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.780147 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5lfmr" event={"ID":"553a9a70-1882-4a91-b6a3-383e723f893a","Type":"ContainerDied","Data":"899272be0e36f8519906fd7bed60c013326da79f4410c7aed52c1eddcdaa5078"} Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.780203 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5lfmr" event={"ID":"553a9a70-1882-4a91-b6a3-383e723f893a","Type":"ContainerStarted","Data":"8ad8c7ccfa589b98de2d00f5f5aa06cacb397ecf5245193b6247b0efe472d68e"} Oct 06 15:17:20 crc kubenswrapper[4888]: I1006 15:17:20.784727 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-28k8r" podStartSLOduration=2.652293311 podStartE2EDuration="16.784678759s" podCreationTimestamp="2025-10-06 15:17:04 +0000 UTC" firstStartedPulling="2025-10-06 15:17:04.901041795 +0000 UTC m=+964.713392513" lastFinishedPulling="2025-10-06 15:17:19.033427233 +0000 UTC m=+978.845777961" observedRunningTime="2025-10-06 15:17:20.779204786 +0000 UTC m=+980.591555524" watchObservedRunningTime="2025-10-06 15:17:20.784678759 +0000 UTC m=+980.597029487" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.260022 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5lfmr" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.267041 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-nk6fs" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.444343 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjvxc\" (UniqueName: \"kubernetes.io/projected/553a9a70-1882-4a91-b6a3-383e723f893a-kube-api-access-gjvxc\") pod \"553a9a70-1882-4a91-b6a3-383e723f893a\" (UID: \"553a9a70-1882-4a91-b6a3-383e723f893a\") " Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.445011 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7hx5\" (UniqueName: \"kubernetes.io/projected/666fdea5-da00-46e5-b4af-4b82ca36e313-kube-api-access-v7hx5\") pod \"666fdea5-da00-46e5-b4af-4b82ca36e313\" (UID: \"666fdea5-da00-46e5-b4af-4b82ca36e313\") " Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.450819 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/666fdea5-da00-46e5-b4af-4b82ca36e313-kube-api-access-v7hx5" (OuterVolumeSpecName: "kube-api-access-v7hx5") pod "666fdea5-da00-46e5-b4af-4b82ca36e313" (UID: "666fdea5-da00-46e5-b4af-4b82ca36e313"). InnerVolumeSpecName "kube-api-access-v7hx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.453498 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553a9a70-1882-4a91-b6a3-383e723f893a-kube-api-access-gjvxc" (OuterVolumeSpecName: "kube-api-access-gjvxc") pod "553a9a70-1882-4a91-b6a3-383e723f893a" (UID: "553a9a70-1882-4a91-b6a3-383e723f893a"). InnerVolumeSpecName "kube-api-access-gjvxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.547989 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7hx5\" (UniqueName: \"kubernetes.io/projected/666fdea5-da00-46e5-b4af-4b82ca36e313-kube-api-access-v7hx5\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.548292 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjvxc\" (UniqueName: \"kubernetes.io/projected/553a9a70-1882-4a91-b6a3-383e723f893a-kube-api-access-gjvxc\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.564241 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rq7rs" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.751150 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6ljw\" (UniqueName: \"kubernetes.io/projected/89d5c344-92d9-4ade-81d3-49f4c2065516-kube-api-access-q6ljw\") pod \"89d5c344-92d9-4ade-81d3-49f4c2065516\" (UID: \"89d5c344-92d9-4ade-81d3-49f4c2065516\") " Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.758016 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d5c344-92d9-4ade-81d3-49f4c2065516-kube-api-access-q6ljw" (OuterVolumeSpecName: "kube-api-access-q6ljw") pod "89d5c344-92d9-4ade-81d3-49f4c2065516" (UID: "89d5c344-92d9-4ade-81d3-49f4c2065516"). InnerVolumeSpecName "kube-api-access-q6ljw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.841841 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"ef90080b2663e32e1198d5c48473d4a7bb31fa8d2eb1e2f5df62fc36a5792112"} Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.841893 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"9349b469c7a791f6a2ed0b0dc0377d957aabb8b6ddbfb160d2ba01bfc41f3934"} Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.844112 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rq7rs" event={"ID":"89d5c344-92d9-4ade-81d3-49f4c2065516","Type":"ContainerDied","Data":"0756820e62fa8f02c943d0558a75711ce9a63f0073f51cea156b713056572a3b"} Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.844168 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0756820e62fa8f02c943d0558a75711ce9a63f0073f51cea156b713056572a3b" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.844128 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rq7rs" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.846987 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nk6fs" event={"ID":"666fdea5-da00-46e5-b4af-4b82ca36e313","Type":"ContainerDied","Data":"5f50507de8f3ca91313b690631a41c6c1b65d79ffab82cb3575bcf265df63468"} Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.847032 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f50507de8f3ca91313b690631a41c6c1b65d79ffab82cb3575bcf265df63468" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.847130 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-nk6fs" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.853047 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6ljw\" (UniqueName: \"kubernetes.io/projected/89d5c344-92d9-4ade-81d3-49f4c2065516-kube-api-access-q6ljw\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.853449 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5lfmr" event={"ID":"553a9a70-1882-4a91-b6a3-383e723f893a","Type":"ContainerDied","Data":"8ad8c7ccfa589b98de2d00f5f5aa06cacb397ecf5245193b6247b0efe472d68e"} Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.853497 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ad8c7ccfa589b98de2d00f5f5aa06cacb397ecf5245193b6247b0efe472d68e" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.853589 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5lfmr" Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.869245 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-745cp" event={"ID":"8e8f800f-89a2-4586-9298-81a993e6f60d","Type":"ContainerStarted","Data":"0f5545a4e8a44e3a5bb9b396c3a93489ed81320867af53589aba298feb307ccf"} Oct 06 15:17:25 crc kubenswrapper[4888]: I1006 15:17:25.888913 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-745cp" podStartSLOduration=3.948721071 podStartE2EDuration="9.888895746s" podCreationTimestamp="2025-10-06 15:17:16 +0000 UTC" firstStartedPulling="2025-10-06 15:17:19.598102608 +0000 UTC m=+979.410453326" lastFinishedPulling="2025-10-06 15:17:25.538277293 +0000 UTC m=+985.350628001" observedRunningTime="2025-10-06 15:17:25.88682443 +0000 UTC m=+985.699175148" watchObservedRunningTime="2025-10-06 15:17:25.888895746 +0000 UTC m=+985.701246474" Oct 06 15:17:26 crc kubenswrapper[4888]: I1006 15:17:26.882408 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"1b4eea4cd7abd9284e73c9f035c8438cf63194ed0cb4491d3b61e6650731ca84"} Oct 06 15:17:26 crc kubenswrapper[4888]: I1006 15:17:26.882706 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"df549a529c808ffbd37639edf0c220790e5439c9df01a63d8d57bcbcb0f13e8d"} Oct 06 15:17:26 crc kubenswrapper[4888]: I1006 15:17:26.882718 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"f1f2159c0d73e55f93a6eea87af3be0801fa912a81ad57ddbb508f28f2eec363"} Oct 06 15:17:26 crc kubenswrapper[4888]: I1006 15:17:26.882726 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"be4f673e14f12dab4bdd157972ba283780d5e1a39b1c17bf68abac8346e7fc74"} Oct 06 15:17:26 crc kubenswrapper[4888]: I1006 15:17:26.882734 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"277682ba-0d72-43d5-b52c-59f6b02b2963","Type":"ContainerStarted","Data":"5c135bc9dc6f805a119cc54325caf9651a855e2446af9964f7f9436d198ce2cc"} Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.211332 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.418899789 podStartE2EDuration="57.211309074s" podCreationTimestamp="2025-10-06 15:16:30 +0000 UTC" firstStartedPulling="2025-10-06 15:17:04.74120284 +0000 UTC m=+964.553553558" lastFinishedPulling="2025-10-06 15:17:25.533612125 +0000 UTC m=+985.345962843" observedRunningTime="2025-10-06 15:17:26.933561596 +0000 UTC m=+986.745912314" watchObservedRunningTime="2025-10-06 15:17:27.211309074 +0000 UTC m=+987.023659792" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.215269 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qbljc"] Oct 06 15:17:27 crc kubenswrapper[4888]: E1006 15:17:27.215675 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d5c344-92d9-4ade-81d3-49f4c2065516" containerName="mariadb-database-create" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.215694 4888 
state_mem.go:107] "Deleted CPUSet assignment" podUID="89d5c344-92d9-4ade-81d3-49f4c2065516" containerName="mariadb-database-create" Oct 06 15:17:27 crc kubenswrapper[4888]: E1006 15:17:27.215717 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="666fdea5-da00-46e5-b4af-4b82ca36e313" containerName="mariadb-database-create" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.215726 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="666fdea5-da00-46e5-b4af-4b82ca36e313" containerName="mariadb-database-create" Oct 06 15:17:27 crc kubenswrapper[4888]: E1006 15:17:27.215746 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553a9a70-1882-4a91-b6a3-383e723f893a" containerName="mariadb-database-create" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.215754 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="553a9a70-1882-4a91-b6a3-383e723f893a" containerName="mariadb-database-create" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.215926 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d5c344-92d9-4ade-81d3-49f4c2065516" containerName="mariadb-database-create" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.215950 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="553a9a70-1882-4a91-b6a3-383e723f893a" containerName="mariadb-database-create" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.215965 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="666fdea5-da00-46e5-b4af-4b82ca36e313" containerName="mariadb-database-create" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.216998 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.219313 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.231380 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qbljc"] Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.378610 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.378676 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.378831 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.378887 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5zx2\" (UniqueName: 
\"kubernetes.io/projected/8c95a401-fa74-4bec-84cc-5ec1954d7f68-kube-api-access-l5zx2\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.378922 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.379075 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-config\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.480596 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.480665 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.480701 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.480720 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5zx2\" (UniqueName: \"kubernetes.io/projected/8c95a401-fa74-4bec-84cc-5ec1954d7f68-kube-api-access-l5zx2\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.480740 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.480779 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-config\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.481510 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.481542 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.481594 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.481808 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-config\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.481861 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.511975 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5zx2\" (UniqueName: \"kubernetes.io/projected/8c95a401-fa74-4bec-84cc-5ec1954d7f68-kube-api-access-l5zx2\") pod \"dnsmasq-dns-77585f5f8c-qbljc\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:27 crc kubenswrapper[4888]: I1006 15:17:27.544165 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:28 crc kubenswrapper[4888]: I1006 15:17:28.181355 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qbljc"] Oct 06 15:17:28 crc kubenswrapper[4888]: W1006 15:17:28.197761 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c95a401_fa74_4bec_84cc_5ec1954d7f68.slice/crio-a50a15756564bf6a0531a277c5c700e46647945bc7a64a34fe286ac164f35d79 WatchSource:0}: Error finding container a50a15756564bf6a0531a277c5c700e46647945bc7a64a34fe286ac164f35d79: Status 404 returned error can't find the container with id a50a15756564bf6a0531a277c5c700e46647945bc7a64a34fe286ac164f35d79 Oct 06 15:17:28 crc kubenswrapper[4888]: I1006 15:17:28.910965 4888 generic.go:334] "Generic (PLEG): container finished" podID="8c95a401-fa74-4bec-84cc-5ec1954d7f68" containerID="adfece0fd37977ee607a30af26fe4fc552c76745622bea167032c4f3d53f943f" exitCode=0 Oct 06 15:17:28 crc kubenswrapper[4888]: I1006 15:17:28.911015 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" event={"ID":"8c95a401-fa74-4bec-84cc-5ec1954d7f68","Type":"ContainerDied","Data":"adfece0fd37977ee607a30af26fe4fc552c76745622bea167032c4f3d53f943f"} Oct 06 15:17:28 crc kubenswrapper[4888]: I1006 15:17:28.911049 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" event={"ID":"8c95a401-fa74-4bec-84cc-5ec1954d7f68","Type":"ContainerStarted","Data":"a50a15756564bf6a0531a277c5c700e46647945bc7a64a34fe286ac164f35d79"} Oct 06 15:17:29 crc kubenswrapper[4888]: I1006 15:17:29.920160 4888 generic.go:334] "Generic (PLEG): container finished" podID="8e8f800f-89a2-4586-9298-81a993e6f60d" containerID="0f5545a4e8a44e3a5bb9b396c3a93489ed81320867af53589aba298feb307ccf" exitCode=0 Oct 06 15:17:29 crc kubenswrapper[4888]: I1006 15:17:29.920327 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-745cp" event={"ID":"8e8f800f-89a2-4586-9298-81a993e6f60d","Type":"ContainerDied","Data":"0f5545a4e8a44e3a5bb9b396c3a93489ed81320867af53589aba298feb307ccf"} Oct 06 15:17:29 crc kubenswrapper[4888]: I1006 15:17:29.924963 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" event={"ID":"8c95a401-fa74-4bec-84cc-5ec1954d7f68","Type":"ContainerStarted","Data":"ca5147a9cad9caa4023310aefb339f08fa9b00c83b3924d979d9533958daf3db"} Oct 06 15:17:29 crc kubenswrapper[4888]: I1006 15:17:29.925001 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.218733 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.239638 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" podStartSLOduration=4.239617706 podStartE2EDuration="4.239617706s" podCreationTimestamp="2025-10-06 15:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:17:29.969579146 +0000 UTC m=+989.781929864" watchObservedRunningTime="2025-10-06 15:17:31.239617706 +0000 UTC m=+991.051968424" Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.345489 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-combined-ca-bundle\") pod \"8e8f800f-89a2-4586-9298-81a993e6f60d\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.345546 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tclgd\" (UniqueName: \"kubernetes.io/projected/8e8f800f-89a2-4586-9298-81a993e6f60d-kube-api-access-tclgd\") pod \"8e8f800f-89a2-4586-9298-81a993e6f60d\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.345641 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-config-data\") pod \"8e8f800f-89a2-4586-9298-81a993e6f60d\" (UID: \"8e8f800f-89a2-4586-9298-81a993e6f60d\") " Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.350752 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8f800f-89a2-4586-9298-81a993e6f60d-kube-api-access-tclgd" (OuterVolumeSpecName: "kube-api-access-tclgd") pod "8e8f800f-89a2-4586-9298-81a993e6f60d" (UID: "8e8f800f-89a2-4586-9298-81a993e6f60d"). InnerVolumeSpecName "kube-api-access-tclgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.374610 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e8f800f-89a2-4586-9298-81a993e6f60d" (UID: "8e8f800f-89a2-4586-9298-81a993e6f60d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.387462 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-config-data" (OuterVolumeSpecName: "config-data") pod "8e8f800f-89a2-4586-9298-81a993e6f60d" (UID: "8e8f800f-89a2-4586-9298-81a993e6f60d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.447063 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.447316 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e8f800f-89a2-4586-9298-81a993e6f60d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.447426 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tclgd\" (UniqueName: \"kubernetes.io/projected/8e8f800f-89a2-4586-9298-81a993e6f60d-kube-api-access-tclgd\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.942563 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-745cp" event={"ID":"8e8f800f-89a2-4586-9298-81a993e6f60d","Type":"ContainerDied","Data":"ec0888d2748008fa92fcf94876922c0188e277baefec4b72d4d1f2302afacf1a"} Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.942609 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec0888d2748008fa92fcf94876922c0188e277baefec4b72d4d1f2302afacf1a" Oct 06 15:17:31 crc kubenswrapper[4888]: I1006 15:17:31.943146 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-745cp" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.246481 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qbljc"] Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.247101 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" podUID="8c95a401-fa74-4bec-84cc-5ec1954d7f68" containerName="dnsmasq-dns" containerID="cri-o://ca5147a9cad9caa4023310aefb339f08fa9b00c83b3924d979d9533958daf3db" gracePeriod=10 Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.273833 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-h9p29"] Oct 06 15:17:32 crc kubenswrapper[4888]: E1006 15:17:32.274235 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8f800f-89a2-4586-9298-81a993e6f60d" containerName="keystone-db-sync" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.274260 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8f800f-89a2-4586-9298-81a993e6f60d" containerName="keystone-db-sync" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.274462 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8f800f-89a2-4586-9298-81a993e6f60d" containerName="keystone-db-sync" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.275165 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.278976 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.280494 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.289494 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pj4xj" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.291000 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h9p29"] Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.299278 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.313045 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ppwsl"] Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.314338 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.376215 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-credential-keys\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.376286 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-fernet-keys\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.376334 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-scripts\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.376406 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-config-data\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.376467 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhxlg\" (UniqueName: \"kubernetes.io/projected/35407e40-4640-4155-9fef-46cea3f2dee8-kube-api-access-dhxlg\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.376528 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-combined-ca-bundle\") pod \"keystone-bootstrap-h9p29\" (UID: 
\"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.451747 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ppwsl"] Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.477989 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-svc\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.478063 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.478106 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhxlg\" (UniqueName: \"kubernetes.io/projected/35407e40-4640-4155-9fef-46cea3f2dee8-kube-api-access-dhxlg\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.478130 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.478176 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-combined-ca-bundle\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.478222 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-config\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.478284 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-credential-keys\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.478347 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-fernet-keys\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.478388 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-scripts\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.478418 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.478444 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgx7p\" (UniqueName: \"kubernetes.io/projected/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-kube-api-access-dgx7p\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.478497 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-config-data\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.490691 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-credential-keys\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.490756 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-combined-ca-bundle\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.491884 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-config-data\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.493171 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-scripts\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.501035 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-fernet-keys\") pod \"keystone-bootstrap-h9p29\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.506316 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhxlg\" (UniqueName: \"kubernetes.io/projected/35407e40-4640-4155-9fef-46cea3f2dee8-kube-api-access-dhxlg\") pod \"keystone-bootstrap-h9p29\" (UID: 
\"35407e40-4640-4155-9fef-46cea3f2dee8\") " pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.574816 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.576905 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.583384 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.583636 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.584677 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-config\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.584837 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.584867 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgx7p\" (UniqueName: \"kubernetes.io/projected/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-kube-api-access-dgx7p\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.584932 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-svc\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.584959 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.584992 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.589264 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-config\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.589829 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.590348 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-svc\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.590860 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.593407 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.595297 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.599218 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.641666 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgx7p\" (UniqueName: \"kubernetes.io/projected/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-kube-api-access-dgx7p\") pod \"dnsmasq-dns-55fff446b9-ppwsl\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.648502 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.689850 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-run-httpd\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.689915 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-scripts\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.689935 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-log-httpd\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.689967 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmgvk\" (UniqueName: \"kubernetes.io/projected/2ceb4186-79b8-4dc6-b54c-7e0681764d35-kube-api-access-rmgvk\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.690084 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-config-data\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.690195 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.690240 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.696277 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67678875c7-wxfg8"] Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.697978 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.705394 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.705627 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.705742 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-c57c2" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.705754 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.791944 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-run-httpd\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.792261 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a088206b-bb6d-455d-b223-689888a75b1c-horizon-secret-key\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.792298 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-scripts\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.792319 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-log-httpd\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.792346 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmgvk\" (UniqueName: \"kubernetes.io/projected/2ceb4186-79b8-4dc6-b54c-7e0681764d35-kube-api-access-rmgvk\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.792384 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-config-data\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.792430 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.792457 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a088206b-bb6d-455d-b223-689888a75b1c-logs\") pod \"horizon-67678875c7-wxfg8\" (UID: 
\"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.792488 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.792530 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-scripts\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.792566 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs8db\" (UniqueName: \"kubernetes.io/projected/a088206b-bb6d-455d-b223-689888a75b1c-kube-api-access-fs8db\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.792588 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-config-data\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.793470 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-run-httpd\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.794356 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-log-httpd\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.798652 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-scripts\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.806993 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.808363 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.813068 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-config-data\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.815341 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ppwsl"] Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.849861 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67678875c7-wxfg8"] Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.854944 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmgvk\" (UniqueName: \"kubernetes.io/projected/2ceb4186-79b8-4dc6-b54c-7e0681764d35-kube-api-access-rmgvk\") pod \"ceilometer-0\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") " pod="openstack/ceilometer-0" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.856179 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7dd84dc557-97qj7"] Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.857553 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.893791 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-scripts\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.893887 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs8db\" (UniqueName: \"kubernetes.io/projected/a088206b-bb6d-455d-b223-689888a75b1c-kube-api-access-fs8db\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.893934 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-config-data\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.893980 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a088206b-bb6d-455d-b223-689888a75b1c-horizon-secret-key\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.894057 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a088206b-bb6d-455d-b223-689888a75b1c-logs\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.894483 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a088206b-bb6d-455d-b223-689888a75b1c-logs\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.895004 4888 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-scripts\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.898256 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-config-data\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.912881 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dd84dc557-97qj7"] Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.914662 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a088206b-bb6d-455d-b223-689888a75b1c-horizon-secret-key\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:32 crc kubenswrapper[4888]: I1006 15:17:32.934104 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.003538 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-logs\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.003625 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-config-data\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.003653 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-horizon-secret-key\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.003674 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-scripts\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.003911 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljtx5\" (UniqueName: \"kubernetes.io/projected/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-kube-api-access-ljtx5\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.040316 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs8db\" (UniqueName: 
\"kubernetes.io/projected/a088206b-bb6d-455d-b223-689888a75b1c-kube-api-access-fs8db\") pod \"horizon-67678875c7-wxfg8\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") " pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.062208 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.076411 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-mk69j"] Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.090617 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-5rgjm"] Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.091905 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-mk69j"] Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.091943 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5rgjm"] Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.092033 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.092607 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.098556 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.098812 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.099151 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dtkhf" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.107269 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljtx5\" (UniqueName: \"kubernetes.io/projected/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-kube-api-access-ljtx5\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.107358 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-logs\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.107454 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-config-data\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.107488 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-horizon-secret-key\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.107510 4888 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-scripts\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.108397 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-logs\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.110394 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-scripts\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.113305 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-config-data\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.160863 4888 generic.go:334] "Generic (PLEG): container finished" podID="38b6bbbb-cfd0-4a91-a830-1f1572e4f519" containerID="d40cb091e7ed4c9067d7ea42ab8adeda978339f510268435dda27cc2c8de1d25" exitCode=0 Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.160973 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-28k8r" event={"ID":"38b6bbbb-cfd0-4a91-a830-1f1572e4f519","Type":"ContainerDied","Data":"d40cb091e7ed4c9067d7ea42ab8adeda978339f510268435dda27cc2c8de1d25"} Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.168420 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-horizon-secret-key\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.187892 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljtx5\" (UniqueName: \"kubernetes.io/projected/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-kube-api-access-ljtx5\") pod \"horizon-7dd84dc557-97qj7\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.210206 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.210295 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nhhv\" (UniqueName: \"kubernetes.io/projected/840197d6-f6a9-4bfc-9e0b-74328e475532-kube-api-access-7nhhv\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.210329 4888 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.210385 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.210478 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-combined-ca-bundle\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.210574 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-config-data\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.210603 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvxcq\" (UniqueName: \"kubernetes.io/projected/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-kube-api-access-jvxcq\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.210662 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-config\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.210677 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.210739 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-scripts\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.210789 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/840197d6-f6a9-4bfc-9e0b-74328e475532-logs\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 
15:17:33.215910 4888 generic.go:334] "Generic (PLEG): container finished" podID="8c95a401-fa74-4bec-84cc-5ec1954d7f68" containerID="ca5147a9cad9caa4023310aefb339f08fa9b00c83b3924d979d9533958daf3db" exitCode=0 Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.215949 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" event={"ID":"8c95a401-fa74-4bec-84cc-5ec1954d7f68","Type":"ContainerDied","Data":"ca5147a9cad9caa4023310aefb339f08fa9b00c83b3924d979d9533958daf3db"} Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.277369 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.311667 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.311742 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-combined-ca-bundle\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.311763 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-config-data\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.311785 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvxcq\" (UniqueName: \"kubernetes.io/projected/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-kube-api-access-jvxcq\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.311828 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-config\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.311844 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.311876 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-scripts\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.311901 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/840197d6-f6a9-4bfc-9e0b-74328e475532-logs\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.311919 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.311974 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nhhv\" (UniqueName: \"kubernetes.io/projected/840197d6-f6a9-4bfc-9e0b-74328e475532-kube-api-access-7nhhv\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.311999 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.314373 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.315044 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.316691 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-config-data\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.317490 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-config\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.318570 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.319343 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: 
\"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.320143 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/840197d6-f6a9-4bfc-9e0b-74328e475532-logs\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.327709 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-scripts\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.335578 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-combined-ca-bundle\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.346041 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nhhv\" (UniqueName: \"kubernetes.io/projected/840197d6-f6a9-4bfc-9e0b-74328e475532-kube-api-access-7nhhv\") pod \"placement-db-sync-5rgjm\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.357281 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvxcq\" (UniqueName: \"kubernetes.io/projected/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-kube-api-access-jvxcq\") pod \"dnsmasq-dns-76fcf4b695-mk69j\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.445755 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5rgjm" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.489764 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.495134 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.635010 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-swift-storage-0\") pod \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.635196 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-nb\") pod \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.641507 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-config\") pod \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.641548 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-svc\") pod \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.641570 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-sb\") pod \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.641691 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5zx2\" (UniqueName: \"kubernetes.io/projected/8c95a401-fa74-4bec-84cc-5ec1954d7f68-kube-api-access-l5zx2\") pod \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\" (UID: \"8c95a401-fa74-4bec-84cc-5ec1954d7f68\") " Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.652699 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c95a401-fa74-4bec-84cc-5ec1954d7f68-kube-api-access-l5zx2" (OuterVolumeSpecName: "kube-api-access-l5zx2") pod "8c95a401-fa74-4bec-84cc-5ec1954d7f68" (UID: "8c95a401-fa74-4bec-84cc-5ec1954d7f68"). InnerVolumeSpecName "kube-api-access-l5zx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.687285 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ppwsl"] Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.715263 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c95a401-fa74-4bec-84cc-5ec1954d7f68" (UID: "8c95a401-fa74-4bec-84cc-5ec1954d7f68"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:33 crc kubenswrapper[4888]: W1006 15:17:33.725534 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e6d1c77_cc0d_4bd7_a5cc_f82d44e7963e.slice/crio-39b6e57c92f706bdddac0bdb9412b23fa74524f6c16bb034ba7699d558caf0d5 WatchSource:0}: Error finding container 39b6e57c92f706bdddac0bdb9412b23fa74524f6c16bb034ba7699d558caf0d5: Status 404 returned error can't find the container with id 39b6e57c92f706bdddac0bdb9412b23fa74524f6c16bb034ba7699d558caf0d5 Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.749151 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5zx2\" (UniqueName: \"kubernetes.io/projected/8c95a401-fa74-4bec-84cc-5ec1954d7f68-kube-api-access-l5zx2\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.749189 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.767700 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c95a401-fa74-4bec-84cc-5ec1954d7f68" (UID: "8c95a401-fa74-4bec-84cc-5ec1954d7f68"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.771231 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h9p29"] Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.811901 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c95a401-fa74-4bec-84cc-5ec1954d7f68" (UID: "8c95a401-fa74-4bec-84cc-5ec1954d7f68"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.819184 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c95a401-fa74-4bec-84cc-5ec1954d7f68" (UID: "8c95a401-fa74-4bec-84cc-5ec1954d7f68"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.850732 4888 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.850761 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.850770 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.859629 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-config" (OuterVolumeSpecName: "config") pod "8c95a401-fa74-4bec-84cc-5ec1954d7f68" (UID: "8c95a401-fa74-4bec-84cc-5ec1954d7f68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.955662 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c95a401-fa74-4bec-84cc-5ec1954d7f68-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:33 crc kubenswrapper[4888]: I1006 15:17:33.974052 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:17:33 crc kubenswrapper[4888]: W1006 15:17:33.999400 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ceb4186_79b8_4dc6_b54c_7e0681764d35.slice/crio-c7c088164569f8789e4944c0a0ba70268ceacca5749c6aa18009b7d23801e3ac WatchSource:0}: Error finding container c7c088164569f8789e4944c0a0ba70268ceacca5749c6aa18009b7d23801e3ac: Status 404 returned error can't find the container with id c7c088164569f8789e4944c0a0ba70268ceacca5749c6aa18009b7d23801e3ac Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.211084 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67678875c7-wxfg8"] Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.281417 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7dd84dc557-97qj7"] Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.284942 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ceb4186-79b8-4dc6-b54c-7e0681764d35","Type":"ContainerStarted","Data":"c7c088164569f8789e4944c0a0ba70268ceacca5749c6aa18009b7d23801e3ac"} Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.295179 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" event={"ID":"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e","Type":"ContainerStarted","Data":"39b6e57c92f706bdddac0bdb9412b23fa74524f6c16bb034ba7699d558caf0d5"} Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.304037 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" event={"ID":"8c95a401-fa74-4bec-84cc-5ec1954d7f68","Type":"ContainerDied","Data":"a50a15756564bf6a0531a277c5c700e46647945bc7a64a34fe286ac164f35d79"} Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.304106 4888 
scope.go:117] "RemoveContainer" containerID="ca5147a9cad9caa4023310aefb339f08fa9b00c83b3924d979d9533958daf3db" Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.304278 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-qbljc" Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.321175 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9p29" event={"ID":"35407e40-4640-4155-9fef-46cea3f2dee8","Type":"ContainerStarted","Data":"a122a10737b9e58acaf39e364356c66f1f2c25550597d5de59694a06341297ad"} Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.338175 4888 scope.go:117] "RemoveContainer" containerID="adfece0fd37977ee607a30af26fe4fc552c76745622bea167032c4f3d53f943f" Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.367253 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-h9p29" podStartSLOduration=2.36723556 podStartE2EDuration="2.36723556s" podCreationTimestamp="2025-10-06 15:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:17:34.36629641 +0000 UTC m=+994.178647128" watchObservedRunningTime="2025-10-06 15:17:34.36723556 +0000 UTC m=+994.179586278" Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.396713 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qbljc"] Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.399497 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-qbljc"] Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.529870 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-mk69j"] Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.555192 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5rgjm"] Oct 06 15:17:34 crc kubenswrapper[4888]: W1006 15:17:34.571155 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod840197d6_f6a9_4bfc_9e0b_74328e475532.slice/crio-0465f400fdd39c97546318dd578fffdf79700f02bf21c679f518283d0e748963 WatchSource:0}: Error finding container 0465f400fdd39c97546318dd578fffdf79700f02bf21c679f518283d0e748963: Status 404 returned error can't find the container with id 0465f400fdd39c97546318dd578fffdf79700f02bf21c679f518283d0e748963 Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.941559 4888 util.go:48] "No ready sandbox for pod can be found. 
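[editor's note: not part of the journal] The "SyncLoop ADD/UPDATE/DELETE/REMOVE" entries above (kubelet.go) are the kubelet's view of pod watch events arriving from the API server: DELETE is the graceful-deletion request (deletionTimestamp set), and REMOVE follows once the object is actually gone. A minimal client-go sketch follows; it is not the kubelet's own code, and it assumes a reachable kubeconfig at the default location. It observes the API-side counterparts of these events for the same namespace.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch pods in the namespace that appears throughout this log.
	w, err := cs.CoreV1().Pods("openstack").Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		if pod, ok := ev.Object.(*corev1.Pod); ok {
			// ADDED/MODIFIED/DELETED roughly parallel SyncLoop ADD/UPDATE/DELETE.
			fmt.Printf("%s %s/%s\n", ev.Type, pod.Namespace, pod.Name)
		}
	}
}

[end note]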
Need to start a new one" pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:34 crc kubenswrapper[4888]: I1006 15:17:34.948060 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c95a401-fa74-4bec-84cc-5ec1954d7f68" path="/var/lib/kubelet/pods/8c95a401-fa74-4bec-84cc-5ec1954d7f68/volumes" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.050629 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-db-sync-config-data\") pod \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.051893 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmppc\" (UniqueName: \"kubernetes.io/projected/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-kube-api-access-qmppc\") pod \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.051979 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-config-data\") pod \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.052017 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-combined-ca-bundle\") pod \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\" (UID: \"38b6bbbb-cfd0-4a91-a830-1f1572e4f519\") " Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.062251 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "38b6bbbb-cfd0-4a91-a830-1f1572e4f519" (UID: "38b6bbbb-cfd0-4a91-a830-1f1572e4f519"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.071189 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-kube-api-access-qmppc" (OuterVolumeSpecName: "kube-api-access-qmppc") pod "38b6bbbb-cfd0-4a91-a830-1f1572e4f519" (UID: "38b6bbbb-cfd0-4a91-a830-1f1572e4f519"). InnerVolumeSpecName "kube-api-access-qmppc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.140122 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38b6bbbb-cfd0-4a91-a830-1f1572e4f519" (UID: "38b6bbbb-cfd0-4a91-a830-1f1572e4f519"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.159092 4888 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.159133 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmppc\" (UniqueName: \"kubernetes.io/projected/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-kube-api-access-qmppc\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.159147 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.168909 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-config-data" (OuterVolumeSpecName: "config-data") pod "38b6bbbb-cfd0-4a91-a830-1f1572e4f519" (UID: "38b6bbbb-cfd0-4a91-a830-1f1572e4f519"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.260280 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b6bbbb-cfd0-4a91-a830-1f1572e4f519-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.353399 4888 generic.go:334] "Generic (PLEG): container finished" podID="4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e" containerID="a040faa220070d8ee26973b74a8d45c7ac2348c17a65c0d0bdfe2d922ffefc3a" exitCode=0 Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.353487 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" event={"ID":"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e","Type":"ContainerDied","Data":"a040faa220070d8ee26973b74a8d45c7ac2348c17a65c0d0bdfe2d922ffefc3a"} Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.369981 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-28k8r" event={"ID":"38b6bbbb-cfd0-4a91-a830-1f1572e4f519","Type":"ContainerDied","Data":"18e2647d2ea3a3424e5c386d6bfae2c8a6d0d36ca0c13136d33bcea032a58471"} Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.370019 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18e2647d2ea3a3424e5c386d6bfae2c8a6d0d36ca0c13136d33bcea032a58471" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.370088 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-28k8r" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.440032 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67678875c7-wxfg8" event={"ID":"a088206b-bb6d-455d-b223-689888a75b1c","Type":"ContainerStarted","Data":"35c2019ceb11a325bea0d279b64a953776d7445a5cad5fd7c401e2a2e05e922f"} Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.480051 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd84dc557-97qj7" event={"ID":"2fc1c76f-dc92-49a3-a5fa-9537a814eb82","Type":"ContainerStarted","Data":"48b66a5c1f11f5dbeacbaff65b1365991c856a0663c7f4df633c2677fed23a36"} Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.499277 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9p29" event={"ID":"35407e40-4640-4155-9fef-46cea3f2dee8","Type":"ContainerStarted","Data":"3cfed561717b9a8b3c32b4bb051f5ecf3a5573d04bed4e8747ed7f2521bf7419"} Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.528913 4888 generic.go:334] "Generic (PLEG): container finished" podID="1a1f4d22-5be7-4de4-81de-dc0eaeebec77" containerID="0510b47276d37295fb938916f75af7b837f53aa360c9b4bc9b293b39c6b90a7e" exitCode=0 Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.528989 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" event={"ID":"1a1f4d22-5be7-4de4-81de-dc0eaeebec77","Type":"ContainerDied","Data":"0510b47276d37295fb938916f75af7b837f53aa360c9b4bc9b293b39c6b90a7e"} Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.529303 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" event={"ID":"1a1f4d22-5be7-4de4-81de-dc0eaeebec77","Type":"ContainerStarted","Data":"557334144a0883ad73263c132645dc9127db6dd66fe33b911199a4f9b7ce955a"} Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.546258 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5rgjm" event={"ID":"840197d6-f6a9-4bfc-9e0b-74328e475532","Type":"ContainerStarted","Data":"0465f400fdd39c97546318dd578fffdf79700f02bf21c679f518283d0e748963"} Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.726101 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67678875c7-wxfg8"] Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.821076 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8547c88cc-gnkgd"] Oct 06 15:17:35 crc kubenswrapper[4888]: E1006 15:17:35.821480 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b6bbbb-cfd0-4a91-a830-1f1572e4f519" containerName="glance-db-sync" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.821495 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b6bbbb-cfd0-4a91-a830-1f1572e4f519" containerName="glance-db-sync" Oct 06 15:17:35 crc kubenswrapper[4888]: E1006 15:17:35.821520 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c95a401-fa74-4bec-84cc-5ec1954d7f68" containerName="init" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.821528 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c95a401-fa74-4bec-84cc-5ec1954d7f68" containerName="init" Oct 06 15:17:35 crc kubenswrapper[4888]: E1006 15:17:35.821542 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c95a401-fa74-4bec-84cc-5ec1954d7f68" containerName="dnsmasq-dns" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.821548 4888 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8c95a401-fa74-4bec-84cc-5ec1954d7f68" containerName="dnsmasq-dns" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.821696 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c95a401-fa74-4bec-84cc-5ec1954d7f68" containerName="dnsmasq-dns" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.821713 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b6bbbb-cfd0-4a91-a830-1f1572e4f519" containerName="glance-db-sync" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.822826 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.847713 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8547c88cc-gnkgd"] Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.877732 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bhg2\" (UniqueName: \"kubernetes.io/projected/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-kube-api-access-2bhg2\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.877828 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-logs\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.878142 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-horizon-secret-key\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.878350 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-config-data\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.878385 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-scripts\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.889541 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:17:35 crc kubenswrapper[4888]: I1006 15:17:35.940106 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-mk69j"] Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.008731 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bhg2\" (UniqueName: \"kubernetes.io/projected/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-kube-api-access-2bhg2\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:36 crc 
kubenswrapper[4888]: I1006 15:17:36.008833 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-logs\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.009193 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-horizon-secret-key\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.009490 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-config-data\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.009535 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-scripts\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.018281 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-logs\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.019474 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-scripts\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.030903 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8kzqg"] Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.032726 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-config-data\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.066119 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-horizon-secret-key\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.069749 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.084408 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bhg2\" (UniqueName: \"kubernetes.io/projected/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-kube-api-access-2bhg2\") pod \"horizon-8547c88cc-gnkgd\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") " pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.108764 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8kzqg"] Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.163680 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.163892 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.163988 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jqp\" (UniqueName: \"kubernetes.io/projected/4a4b9805-b00b-4f77-9df2-4e93a713e673-kube-api-access-45jqp\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.164018 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.164109 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-config\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.164138 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.195942 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.216047 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e12a-account-create-d7cmc"] Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.217215 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e12a-account-create-d7cmc" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.224997 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.266947 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-config\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.266996 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.267079 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.267146 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.267192 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jqp\" (UniqueName: \"kubernetes.io/projected/4a4b9805-b00b-4f77-9df2-4e93a713e673-kube-api-access-45jqp\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.267212 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.272728 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-config\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.273414 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.274089 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-svc\") pod 
\"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.274716 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.279985 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.315370 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e12a-account-create-d7cmc"] Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.350760 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jqp\" (UniqueName: \"kubernetes.io/projected/4a4b9805-b00b-4f77-9df2-4e93a713e673-kube-api-access-45jqp\") pod \"dnsmasq-dns-8b5c85b87-8kzqg\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.372991 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxcbg\" (UniqueName: \"kubernetes.io/projected/c98f487b-c9a8-402c-9595-d5bc0e7c66fa-kube-api-access-xxcbg\") pod \"cinder-e12a-account-create-d7cmc\" (UID: \"c98f487b-c9a8-402c-9595-d5bc0e7c66fa\") " pod="openstack/cinder-e12a-account-create-d7cmc" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.397860 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c32f-account-create-g69zm"] Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.399152 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c32f-account-create-g69zm" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.402369 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.414211 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c32f-account-create-g69zm"] Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.475044 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gzzx\" (UniqueName: \"kubernetes.io/projected/6ada5181-629e-4efb-97e8-e21d5d601d09-kube-api-access-4gzzx\") pod \"barbican-c32f-account-create-g69zm\" (UID: \"6ada5181-629e-4efb-97e8-e21d5d601d09\") " pod="openstack/barbican-c32f-account-create-g69zm" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.475151 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxcbg\" (UniqueName: \"kubernetes.io/projected/c98f487b-c9a8-402c-9595-d5bc0e7c66fa-kube-api-access-xxcbg\") pod \"cinder-e12a-account-create-d7cmc\" (UID: \"c98f487b-c9a8-402c-9595-d5bc0e7c66fa\") " pod="openstack/cinder-e12a-account-create-d7cmc" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.492193 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e5a4-account-create-hf6jn"] Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.494171 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.495138 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e5a4-account-create-hf6jn" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.500303 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.532876 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e5a4-account-create-hf6jn"] Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.547726 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxcbg\" (UniqueName: \"kubernetes.io/projected/c98f487b-c9a8-402c-9595-d5bc0e7c66fa-kube-api-access-xxcbg\") pod \"cinder-e12a-account-create-d7cmc\" (UID: \"c98f487b-c9a8-402c-9595-d5bc0e7c66fa\") " pod="openstack/cinder-e12a-account-create-d7cmc" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.563259 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e12a-account-create-d7cmc" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.577411 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gzzx\" (UniqueName: \"kubernetes.io/projected/6ada5181-629e-4efb-97e8-e21d5d601d09-kube-api-access-4gzzx\") pod \"barbican-c32f-account-create-g69zm\" (UID: \"6ada5181-629e-4efb-97e8-e21d5d601d09\") " pod="openstack/barbican-c32f-account-create-g69zm" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.577483 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9k89\" (UniqueName: \"kubernetes.io/projected/33e3ab89-3ae3-42ef-b94c-fdc2c205e105-kube-api-access-x9k89\") pod \"neutron-e5a4-account-create-hf6jn\" (UID: \"33e3ab89-3ae3-42ef-b94c-fdc2c205e105\") " pod="openstack/neutron-e5a4-account-create-hf6jn" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.604419 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" podUID="1a1f4d22-5be7-4de4-81de-dc0eaeebec77" containerName="dnsmasq-dns" containerID="cri-o://5508e4a3da24bdc6dae042d22ae24baf284fc904142d86da144d11b68c5a62d4" gracePeriod=10 Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.604708 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" event={"ID":"1a1f4d22-5be7-4de4-81de-dc0eaeebec77","Type":"ContainerStarted","Data":"5508e4a3da24bdc6dae042d22ae24baf284fc904142d86da144d11b68c5a62d4"} Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.604753 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.630391 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.659902 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gzzx\" (UniqueName: \"kubernetes.io/projected/6ada5181-629e-4efb-97e8-e21d5d601d09-kube-api-access-4gzzx\") pod \"barbican-c32f-account-create-g69zm\" (UID: \"6ada5181-629e-4efb-97e8-e21d5d601d09\") " pod="openstack/barbican-c32f-account-create-g69zm" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.679238 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9k89\" (UniqueName: \"kubernetes.io/projected/33e3ab89-3ae3-42ef-b94c-fdc2c205e105-kube-api-access-x9k89\") pod \"neutron-e5a4-account-create-hf6jn\" (UID: \"33e3ab89-3ae3-42ef-b94c-fdc2c205e105\") " pod="openstack/neutron-e5a4-account-create-hf6jn" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.679547 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.690530 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.690750 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.690932 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rvq4c" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.720873 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.744204 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" podStartSLOduration=4.744183682 podStartE2EDuration="4.744183682s" podCreationTimestamp="2025-10-06 15:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:17:36.659583668 +0000 UTC m=+996.471934386" watchObservedRunningTime="2025-10-06 15:17:36.744183682 +0000 UTC m=+996.556534400" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.751396 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9k89\" (UniqueName: \"kubernetes.io/projected/33e3ab89-3ae3-42ef-b94c-fdc2c205e105-kube-api-access-x9k89\") pod \"neutron-e5a4-account-create-hf6jn\" (UID: \"33e3ab89-3ae3-42ef-b94c-fdc2c205e105\") " pod="openstack/neutron-e5a4-account-create-hf6jn" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.782835 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.782881 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-logs\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.782931 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.782986 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-config-data\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.783010 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ms79\" (UniqueName: 
\"kubernetes.io/projected/98b2c028-29f0-40d4-a468-4bfe1e51407c-kube-api-access-6ms79\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.783055 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.783099 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-scripts\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.783502 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c32f-account-create-g69zm" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.795362 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e5a4-account-create-hf6jn" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.884814 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.884895 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-config-data\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.884915 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ms79\" (UniqueName: \"kubernetes.io/projected/98b2c028-29f0-40d4-a468-4bfe1e51407c-kube-api-access-6ms79\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.884951 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.884989 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-scripts\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.885047 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.885065 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-logs\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.885599 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-logs\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.889338 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.891697 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.896191 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-config-data\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.898350 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-scripts\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.916024 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:36 crc kubenswrapper[4888]: I1006 15:17:36.951968 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ms79\" (UniqueName: \"kubernetes.io/projected/98b2c028-29f0-40d4-a468-4bfe1e51407c-kube-api-access-6ms79\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.019742 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:37 crc 
kubenswrapper[4888]: I1006 15:17:37.064572 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.188916 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:17:37 crc kubenswrapper[4888]: E1006 15:17:37.189414 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e" containerName="init" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.189428 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e" containerName="init" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.189618 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e" containerName="init" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.190481 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.197541 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.205546 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-nb\") pod \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.205647 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-config\") pod \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.205681 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-svc\") pod \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.205756 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgx7p\" (UniqueName: \"kubernetes.io/projected/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-kube-api-access-dgx7p\") pod \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.205828 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-sb\") pod \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.205908 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-swift-storage-0\") pod \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\" (UID: \"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.234159 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 
15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.247917 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-kube-api-access-dgx7p" (OuterVolumeSpecName: "kube-api-access-dgx7p") pod "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e" (UID: "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e"). InnerVolumeSpecName "kube-api-access-dgx7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.250098 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.288787 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e" (UID: "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.295599 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e" (UID: "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.308001 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.308246 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.308405 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.308494 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.308592 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-logs\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc 
kubenswrapper[4888]: I1006 15:17:37.308699 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.308882 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7qd\" (UniqueName: \"kubernetes.io/projected/70502939-44bc-4e73-933d-22ad0fa1657d-kube-api-access-kh7qd\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.309051 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.309128 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgx7p\" (UniqueName: \"kubernetes.io/projected/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-kube-api-access-dgx7p\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.309209 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.335548 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e" (UID: "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.367532 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-config" (OuterVolumeSpecName: "config") pod "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e" (UID: "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.371666 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e" (UID: "4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.412614 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.412838 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh7qd\" (UniqueName: \"kubernetes.io/projected/70502939-44bc-4e73-933d-22ad0fa1657d-kube-api-access-kh7qd\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.412961 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.413014 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.413123 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.413164 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.413192 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-logs\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.413311 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.413328 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.413340 4888 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:37 crc 
kubenswrapper[4888]: I1006 15:17:37.414115 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-logs\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.416320 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.420114 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.432229 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.452550 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.452568 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.466029 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh7qd\" (UniqueName: \"kubernetes.io/projected/70502939-44bc-4e73-933d-22ad0fa1657d-kube-api-access-kh7qd\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.481195 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8547c88cc-gnkgd"] Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.507785 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.655981 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8547c88cc-gnkgd" event={"ID":"e1dd4b83-bf5d-4198-b411-97a5a8d057d6","Type":"ContainerStarted","Data":"130eba8e2a69c54462e8ad4945fc6ecaab8c85bd38875f88432afebe1e627890"} Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.662519 4888 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" event={"ID":"4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e","Type":"ContainerDied","Data":"39b6e57c92f706bdddac0bdb9412b23fa74524f6c16bb034ba7699d558caf0d5"} Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.662568 4888 scope.go:117] "RemoveContainer" containerID="a040faa220070d8ee26973b74a8d45c7ac2348c17a65c0d0bdfe2d922ffefc3a" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.662696 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-ppwsl" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.678334 4888 generic.go:334] "Generic (PLEG): container finished" podID="1a1f4d22-5be7-4de4-81de-dc0eaeebec77" containerID="5508e4a3da24bdc6dae042d22ae24baf284fc904142d86da144d11b68c5a62d4" exitCode=0 Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.678387 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" event={"ID":"1a1f4d22-5be7-4de4-81de-dc0eaeebec77","Type":"ContainerDied","Data":"5508e4a3da24bdc6dae042d22ae24baf284fc904142d86da144d11b68c5a62d4"} Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.792074 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8kzqg"] Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.818390 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ppwsl"] Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.819501 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.819527 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.840520 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-ppwsl"] Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.911455 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e12a-account-create-d7cmc"] Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.958362 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-config\") pod \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.958428 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-swift-storage-0\") pod \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.958479 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-svc\") pod \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.958680 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvxcq\" (UniqueName: \"kubernetes.io/projected/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-kube-api-access-jvxcq\") pod \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.958725 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-nb\") pod \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.958932 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-sb\") pod \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\" (UID: \"1a1f4d22-5be7-4de4-81de-dc0eaeebec77\") " Oct 06 15:17:37 crc kubenswrapper[4888]: I1006 15:17:37.972953 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-kube-api-access-jvxcq" (OuterVolumeSpecName: "kube-api-access-jvxcq") pod "1a1f4d22-5be7-4de4-81de-dc0eaeebec77" (UID: "1a1f4d22-5be7-4de4-81de-dc0eaeebec77"). InnerVolumeSpecName "kube-api-access-jvxcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:38 crc kubenswrapper[4888]: I1006 15:17:38.067993 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvxcq\" (UniqueName: \"kubernetes.io/projected/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-kube-api-access-jvxcq\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:38 crc kubenswrapper[4888]: I1006 15:17:38.141960 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e5a4-account-create-hf6jn"] Oct 06 15:17:38 crc kubenswrapper[4888]: W1006 15:17:38.145336 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33e3ab89_3ae3_42ef_b94c_fdc2c205e105.slice/crio-8bede56a46753ef009006afab28399d0cb679a32e8c56ac4b8ea0ad327a2861d WatchSource:0}: Error finding container 8bede56a46753ef009006afab28399d0cb679a32e8c56ac4b8ea0ad327a2861d: Status 404 returned error can't find the container with id 8bede56a46753ef009006afab28399d0cb679a32e8c56ac4b8ea0ad327a2861d Oct 06 15:17:38 crc kubenswrapper[4888]: I1006 15:17:38.154647 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c32f-account-create-g69zm"] Oct 06 15:17:38 crc kubenswrapper[4888]: I1006 15:17:38.240647 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a1f4d22-5be7-4de4-81de-dc0eaeebec77" (UID: "1a1f4d22-5be7-4de4-81de-dc0eaeebec77"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:38 crc kubenswrapper[4888]: I1006 15:17:38.252642 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1a1f4d22-5be7-4de4-81de-dc0eaeebec77" (UID: "1a1f4d22-5be7-4de4-81de-dc0eaeebec77"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:38 crc kubenswrapper[4888]: I1006 15:17:38.271389 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-config" (OuterVolumeSpecName: "config") pod "1a1f4d22-5be7-4de4-81de-dc0eaeebec77" (UID: "1a1f4d22-5be7-4de4-81de-dc0eaeebec77"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:38 crc kubenswrapper[4888]: I1006 15:17:38.275055 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:38 crc kubenswrapper[4888]: I1006 15:17:38.275082 4888 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:38 crc kubenswrapper[4888]: I1006 15:17:38.275094 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.318833 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1a1f4d22-5be7-4de4-81de-dc0eaeebec77" (UID: "1a1f4d22-5be7-4de4-81de-dc0eaeebec77"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.331134 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1a1f4d22-5be7-4de4-81de-dc0eaeebec77" (UID: "1a1f4d22-5be7-4de4-81de-dc0eaeebec77"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.377628 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.377896 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a1f4d22-5be7-4de4-81de-dc0eaeebec77-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.532256 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.696788 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e5a4-account-create-hf6jn" event={"ID":"33e3ab89-3ae3-42ef-b94c-fdc2c205e105","Type":"ContainerStarted","Data":"8bede56a46753ef009006afab28399d0cb679a32e8c56ac4b8ea0ad327a2861d"} Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.701161 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" event={"ID":"1a1f4d22-5be7-4de4-81de-dc0eaeebec77","Type":"ContainerDied","Data":"557334144a0883ad73263c132645dc9127db6dd66fe33b911199a4f9b7ce955a"} Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.701277 4888 scope.go:117] "RemoveContainer" containerID="5508e4a3da24bdc6dae042d22ae24baf284fc904142d86da144d11b68c5a62d4" Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.701433 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-mk69j" Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.711561 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c32f-account-create-g69zm" event={"ID":"6ada5181-629e-4efb-97e8-e21d5d601d09","Type":"ContainerStarted","Data":"f74f57f36a9ba60a42364b4bb0493b157a03a58748942dd32ae9b0f8ea33c175"} Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.716407 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98b2c028-29f0-40d4-a468-4bfe1e51407c","Type":"ContainerStarted","Data":"56731ffcd074ffda62717da64e38e430c5dbdaa593cd146e1c881d7d434c7bd8"} Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.721845 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e12a-account-create-d7cmc" event={"ID":"c98f487b-c9a8-402c-9595-d5bc0e7c66fa","Type":"ContainerStarted","Data":"09e4920129b22dbbf6d9ddf47b12be07bf57f42a97305fec4bc5db3043e8563b"} Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.731816 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" event={"ID":"4a4b9805-b00b-4f77-9df2-4e93a713e673","Type":"ContainerStarted","Data":"d3b0adbdfd257b0620792699022012151a1f9abf6ea8d59f732b8b24520f87b9"} Oct 06 15:17:39 crc kubenswrapper[4888]: W1006 15:17:38.734937 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70502939_44bc_4e73_933d_22ad0fa1657d.slice/crio-15cc028cff71e032e8ed21f5f6b6ff0ffe04adf25cd45715e60b1580f50bddc3 WatchSource:0}: Error finding container 15cc028cff71e032e8ed21f5f6b6ff0ffe04adf25cd45715e60b1580f50bddc3: Status 404 returned error can't find the container with id 15cc028cff71e032e8ed21f5f6b6ff0ffe04adf25cd45715e60b1580f50bddc3 Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:38.739494 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.021983 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e" path="/var/lib/kubelet/pods/4e6d1c77-cc0d-4bd7-a5cc-f82d44e7963e/volumes" Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.039944 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-mk69j"] Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.047447 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-mk69j"] Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.058729 4888 scope.go:117] "RemoveContainer" containerID="0510b47276d37295fb938916f75af7b837f53aa360c9b4bc9b293b39c6b90a7e" Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.761609 4888 generic.go:334] "Generic (PLEG): container finished" podID="33e3ab89-3ae3-42ef-b94c-fdc2c205e105" containerID="da3ff19de0e0693f54dca0017bb4177218b6679a6c097a42bbf619d52b34f2f7" exitCode=0 Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.761673 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e5a4-account-create-hf6jn" event={"ID":"33e3ab89-3ae3-42ef-b94c-fdc2c205e105","Type":"ContainerDied","Data":"da3ff19de0e0693f54dca0017bb4177218b6679a6c097a42bbf619d52b34f2f7"} Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.768256 4888 generic.go:334] "Generic (PLEG): container finished" podID="6ada5181-629e-4efb-97e8-e21d5d601d09" 
containerID="5d78e5738988f3a051fad138a86641d05f7d98375731cbc9ec37fdd673bfc9d1" exitCode=0 Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.768324 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c32f-account-create-g69zm" event={"ID":"6ada5181-629e-4efb-97e8-e21d5d601d09","Type":"ContainerDied","Data":"5d78e5738988f3a051fad138a86641d05f7d98375731cbc9ec37fdd673bfc9d1"} Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.771935 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70502939-44bc-4e73-933d-22ad0fa1657d","Type":"ContainerStarted","Data":"15cc028cff71e032e8ed21f5f6b6ff0ffe04adf25cd45715e60b1580f50bddc3"} Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.782573 4888 generic.go:334] "Generic (PLEG): container finished" podID="c98f487b-c9a8-402c-9595-d5bc0e7c66fa" containerID="3cbccdc8bfdf1fedc812dd4f543dab3b10769aa7f88f9e7bde8dce16828cefbd" exitCode=0 Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.782642 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e12a-account-create-d7cmc" event={"ID":"c98f487b-c9a8-402c-9595-d5bc0e7c66fa","Type":"ContainerDied","Data":"3cbccdc8bfdf1fedc812dd4f543dab3b10769aa7f88f9e7bde8dce16828cefbd"} Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.793499 4888 generic.go:334] "Generic (PLEG): container finished" podID="4a4b9805-b00b-4f77-9df2-4e93a713e673" containerID="f3fc45b407916b9417eb0de175fe7d083bb034f4e52c8b1462d43c2c72f39d54" exitCode=0 Oct 06 15:17:39 crc kubenswrapper[4888]: I1006 15:17:39.793735 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" event={"ID":"4a4b9805-b00b-4f77-9df2-4e93a713e673","Type":"ContainerDied","Data":"f3fc45b407916b9417eb0de175fe7d083bb034f4e52c8b1462d43c2c72f39d54"} Oct 06 15:17:40 crc kubenswrapper[4888]: I1006 15:17:40.808617 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70502939-44bc-4e73-933d-22ad0fa1657d","Type":"ContainerStarted","Data":"2bddd7936dfe41981b0ae29c6b9e18894ab0b30b7ff9ce4dcaf00011c115e633"} Oct 06 15:17:40 crc kubenswrapper[4888]: I1006 15:17:40.811311 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98b2c028-29f0-40d4-a468-4bfe1e51407c","Type":"ContainerStarted","Data":"b64561bff123203328b4c1331ac37c8731ab29f517398b5ad576f9f2034fcb93"} Oct 06 15:17:40 crc kubenswrapper[4888]: I1006 15:17:40.823367 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" event={"ID":"4a4b9805-b00b-4f77-9df2-4e93a713e673","Type":"ContainerStarted","Data":"6a6154567e8a6e33eedfde4e3c07bab975dabaa0e6222049efaaf552c48418c9"} Oct 06 15:17:40 crc kubenswrapper[4888]: I1006 15:17:40.949272 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a1f4d22-5be7-4de4-81de-dc0eaeebec77" path="/var/lib/kubelet/pods/1a1f4d22-5be7-4de4-81de-dc0eaeebec77/volumes" Oct 06 15:17:40 crc kubenswrapper[4888]: I1006 15:17:40.970027 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" podStartSLOduration=5.970003813 podStartE2EDuration="5.970003813s" podCreationTimestamp="2025-10-06 15:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:17:40.847090013 +0000 UTC m=+1000.659440741" 
watchObservedRunningTime="2025-10-06 15:17:40.970003813 +0000 UTC m=+1000.782354531" Oct 06 15:17:41 crc kubenswrapper[4888]: I1006 15:17:41.495203 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:41 crc kubenswrapper[4888]: I1006 15:17:41.837993 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70502939-44bc-4e73-933d-22ad0fa1657d","Type":"ContainerStarted","Data":"b48a1660dc35c9828de82116ca6d13f2174568fa3c0bc0119cfa9769973597b9"} Oct 06 15:17:41 crc kubenswrapper[4888]: I1006 15:17:41.846365 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98b2c028-29f0-40d4-a468-4bfe1e51407c","Type":"ContainerStarted","Data":"c046024e81f883856f34a082aaaafea9a787ec2cd4fbf2411d563df7d4247359"} Oct 06 15:17:41 crc kubenswrapper[4888]: I1006 15:17:41.867007 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.866988234 podStartE2EDuration="5.866988234s" podCreationTimestamp="2025-10-06 15:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:17:41.860397856 +0000 UTC m=+1001.672748574" watchObservedRunningTime="2025-10-06 15:17:41.866988234 +0000 UTC m=+1001.679338952" Oct 06 15:17:41 crc kubenswrapper[4888]: I1006 15:17:41.895304 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.8952847649999995 podStartE2EDuration="6.895284765s" podCreationTimestamp="2025-10-06 15:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:17:41.891680181 +0000 UTC m=+1001.704030899" watchObservedRunningTime="2025-10-06 15:17:41.895284765 +0000 UTC m=+1001.707635483" Oct 06 15:17:42 crc kubenswrapper[4888]: I1006 15:17:42.890894 4888 generic.go:334] "Generic (PLEG): container finished" podID="35407e40-4640-4155-9fef-46cea3f2dee8" containerID="3cfed561717b9a8b3c32b4bb051f5ecf3a5573d04bed4e8747ed7f2521bf7419" exitCode=0 Oct 06 15:17:42 crc kubenswrapper[4888]: I1006 15:17:42.891014 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9p29" event={"ID":"35407e40-4640-4155-9fef-46cea3f2dee8","Type":"ContainerDied","Data":"3cfed561717b9a8b3c32b4bb051f5ecf3a5573d04bed4e8747ed7f2521bf7419"} Oct 06 15:17:43 crc kubenswrapper[4888]: I1006 15:17:43.900732 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c32f-account-create-g69zm" event={"ID":"6ada5181-629e-4efb-97e8-e21d5d601d09","Type":"ContainerDied","Data":"f74f57f36a9ba60a42364b4bb0493b157a03a58748942dd32ae9b0f8ea33c175"} Oct 06 15:17:43 crc kubenswrapper[4888]: I1006 15:17:43.900962 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f74f57f36a9ba60a42364b4bb0493b157a03a58748942dd32ae9b0f8ea33c175" Oct 06 15:17:43 crc kubenswrapper[4888]: I1006 15:17:43.903371 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e12a-account-create-d7cmc" event={"ID":"c98f487b-c9a8-402c-9595-d5bc0e7c66fa","Type":"ContainerDied","Data":"09e4920129b22dbbf6d9ddf47b12be07bf57f42a97305fec4bc5db3043e8563b"} Oct 06 15:17:43 crc kubenswrapper[4888]: I1006 15:17:43.903402 4888 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="09e4920129b22dbbf6d9ddf47b12be07bf57f42a97305fec4bc5db3043e8563b" Oct 06 15:17:43 crc kubenswrapper[4888]: I1006 15:17:43.979087 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c32f-account-create-g69zm" Oct 06 15:17:43 crc kubenswrapper[4888]: I1006 15:17:43.992044 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e12a-account-create-d7cmc" Oct 06 15:17:44 crc kubenswrapper[4888]: I1006 15:17:44.020201 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gzzx\" (UniqueName: \"kubernetes.io/projected/6ada5181-629e-4efb-97e8-e21d5d601d09-kube-api-access-4gzzx\") pod \"6ada5181-629e-4efb-97e8-e21d5d601d09\" (UID: \"6ada5181-629e-4efb-97e8-e21d5d601d09\") " Oct 06 15:17:44 crc kubenswrapper[4888]: I1006 15:17:44.048690 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ada5181-629e-4efb-97e8-e21d5d601d09-kube-api-access-4gzzx" (OuterVolumeSpecName: "kube-api-access-4gzzx") pod "6ada5181-629e-4efb-97e8-e21d5d601d09" (UID: "6ada5181-629e-4efb-97e8-e21d5d601d09"). InnerVolumeSpecName "kube-api-access-4gzzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:44 crc kubenswrapper[4888]: I1006 15:17:44.124872 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxcbg\" (UniqueName: \"kubernetes.io/projected/c98f487b-c9a8-402c-9595-d5bc0e7c66fa-kube-api-access-xxcbg\") pod \"c98f487b-c9a8-402c-9595-d5bc0e7c66fa\" (UID: \"c98f487b-c9a8-402c-9595-d5bc0e7c66fa\") " Oct 06 15:17:44 crc kubenswrapper[4888]: I1006 15:17:44.125963 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gzzx\" (UniqueName: \"kubernetes.io/projected/6ada5181-629e-4efb-97e8-e21d5d601d09-kube-api-access-4gzzx\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:44 crc kubenswrapper[4888]: I1006 15:17:44.135318 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98f487b-c9a8-402c-9595-d5bc0e7c66fa-kube-api-access-xxcbg" (OuterVolumeSpecName: "kube-api-access-xxcbg") pod "c98f487b-c9a8-402c-9595-d5bc0e7c66fa" (UID: "c98f487b-c9a8-402c-9595-d5bc0e7c66fa"). InnerVolumeSpecName "kube-api-access-xxcbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:44 crc kubenswrapper[4888]: I1006 15:17:44.231025 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxcbg\" (UniqueName: \"kubernetes.io/projected/c98f487b-c9a8-402c-9595-d5bc0e7c66fa-kube-api-access-xxcbg\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:44 crc kubenswrapper[4888]: I1006 15:17:44.914779 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e12a-account-create-d7cmc" Oct 06 15:17:44 crc kubenswrapper[4888]: I1006 15:17:44.914877 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c32f-account-create-g69zm" Oct 06 15:17:45 crc kubenswrapper[4888]: E1006 15:17:45.176099 4888 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ada5181_629e_4efb_97e8_e21d5d601d09.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ada5181_629e_4efb_97e8_e21d5d601d09.slice/crio-f74f57f36a9ba60a42364b4bb0493b157a03a58748942dd32ae9b0f8ea33c175\": RecentStats: unable to find data in memory cache]" Oct 06 15:17:45 crc kubenswrapper[4888]: I1006 15:17:45.917365 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:17:45 crc kubenswrapper[4888]: I1006 15:17:45.917930 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="98b2c028-29f0-40d4-a468-4bfe1e51407c" containerName="glance-log" containerID="cri-o://b64561bff123203328b4c1331ac37c8731ab29f517398b5ad576f9f2034fcb93" gracePeriod=30 Oct 06 15:17:45 crc kubenswrapper[4888]: I1006 15:17:45.918090 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="98b2c028-29f0-40d4-a468-4bfe1e51407c" containerName="glance-httpd" containerID="cri-o://c046024e81f883856f34a082aaaafea9a787ec2cd4fbf2411d563df7d4247359" gracePeriod=30 Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.164435 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.164934 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="70502939-44bc-4e73-933d-22ad0fa1657d" containerName="glance-log" containerID="cri-o://2bddd7936dfe41981b0ae29c6b9e18894ab0b30b7ff9ce4dcaf00011c115e633" gracePeriod=30 Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.165327 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="70502939-44bc-4e73-933d-22ad0fa1657d" containerName="glance-httpd" containerID="cri-o://b48a1660dc35c9828de82116ca6d13f2174568fa3c0bc0119cfa9769973597b9" gracePeriod=30 Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.425840 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dd84dc557-97qj7"] Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.472661 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-ffc855b96-nhf9w"] Oct 06 15:17:46 crc kubenswrapper[4888]: E1006 15:17:46.473165 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98f487b-c9a8-402c-9595-d5bc0e7c66fa" containerName="mariadb-account-create" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.473192 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98f487b-c9a8-402c-9595-d5bc0e7c66fa" containerName="mariadb-account-create" Oct 06 15:17:46 crc kubenswrapper[4888]: E1006 15:17:46.473223 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a1f4d22-5be7-4de4-81de-dc0eaeebec77" containerName="dnsmasq-dns" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.473233 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a1f4d22-5be7-4de4-81de-dc0eaeebec77" containerName="dnsmasq-dns" Oct 06 15:17:46 
crc kubenswrapper[4888]: E1006 15:17:46.473249 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ada5181-629e-4efb-97e8-e21d5d601d09" containerName="mariadb-account-create" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.473259 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ada5181-629e-4efb-97e8-e21d5d601d09" containerName="mariadb-account-create" Oct 06 15:17:46 crc kubenswrapper[4888]: E1006 15:17:46.473282 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a1f4d22-5be7-4de4-81de-dc0eaeebec77" containerName="init" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.473290 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a1f4d22-5be7-4de4-81de-dc0eaeebec77" containerName="init" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.473494 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ada5181-629e-4efb-97e8-e21d5d601d09" containerName="mariadb-account-create" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.473507 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a1f4d22-5be7-4de4-81de-dc0eaeebec77" containerName="dnsmasq-dns" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.473521 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="c98f487b-c9a8-402c-9595-d5bc0e7c66fa" containerName="mariadb-account-create" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.474697 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.478720 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.501016 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.502969 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ffc855b96-nhf9w"] Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.563861 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-h6d7m"] Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.565325 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.574681 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-combined-ca-bundle\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.574722 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-config-data\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.574749 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms462\" (UniqueName: \"kubernetes.io/projected/0574c745-cac5-4deb-87cc-a04c1b09aa9a-kube-api-access-ms462\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.574768 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-secret-key\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.574830 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0574c745-cac5-4deb-87cc-a04c1b09aa9a-logs\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.574879 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-tls-certs\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.574977 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-scripts\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.579700 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.589456 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-k2zw7" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.589643 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.589938 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-h6d7m"] Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.621213 4888 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8547c88cc-gnkgd"] Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.656883 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6588c6d648-nnxsr"] Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.658368 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.684887 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-db-sync-config-data\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.684967 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-scripts\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.685091 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-combined-ca-bundle\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.685128 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-config-data\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.685159 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms462\" (UniqueName: \"kubernetes.io/projected/0574c745-cac5-4deb-87cc-a04c1b09aa9a-kube-api-access-ms462\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.685186 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-secret-key\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.685209 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-combined-ca-bundle\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.685281 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0574c745-cac5-4deb-87cc-a04c1b09aa9a-logs\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.685302 4888 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-config-data\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.685356 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tzpd\" (UniqueName: \"kubernetes.io/projected/caf441af-cd19-416e-9759-8634523c0979-kube-api-access-4tzpd\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.685380 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-tls-certs\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.685400 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-scripts\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.685434 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/caf441af-cd19-416e-9759-8634523c0979-etc-machine-id\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.686775 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-config-data\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.686987 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0574c745-cac5-4deb-87cc-a04c1b09aa9a-logs\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.687643 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-scripts\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.695483 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-tls-certs\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.700860 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tbcr2"] Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.701318 4888 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-tbcr2" podUID="f107030a-25fb-4025-a397-54c4e90b3a60" containerName="dnsmasq-dns" containerID="cri-o://fe2f9dc8b1e5c7fe90cafaba595ece0e1647c9612250257839bcf166ad4860ea" gracePeriod=10 Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.710463 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6588c6d648-nnxsr"] Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.715288 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-secret-key\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.719528 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-combined-ca-bundle\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.732067 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms462\" (UniqueName: \"kubernetes.io/projected/0574c745-cac5-4deb-87cc-a04c1b09aa9a-kube-api-access-ms462\") pod \"horizon-ffc855b96-nhf9w\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.786853 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-combined-ca-bundle\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.786932 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/64852c10-aeb0-424b-a601-0b46718c0fc7-horizon-tls-certs\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.786970 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64852c10-aeb0-424b-a601-0b46718c0fc7-horizon-secret-key\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.786995 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64852c10-aeb0-424b-a601-0b46718c0fc7-config-data\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.787021 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-config-data\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc 
kubenswrapper[4888]: I1006 15:17:46.787047 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64852c10-aeb0-424b-a601-0b46718c0fc7-scripts\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.787089 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tzpd\" (UniqueName: \"kubernetes.io/projected/caf441af-cd19-416e-9759-8634523c0979-kube-api-access-4tzpd\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.787112 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-scripts\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.787145 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/caf441af-cd19-416e-9759-8634523c0979-etc-machine-id\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.787206 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-db-sync-config-data\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.787234 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26hnl\" (UniqueName: \"kubernetes.io/projected/64852c10-aeb0-424b-a601-0b46718c0fc7-kube-api-access-26hnl\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.787304 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64852c10-aeb0-424b-a601-0b46718c0fc7-combined-ca-bundle\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.787331 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64852c10-aeb0-424b-a601-0b46718c0fc7-logs\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.795044 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/caf441af-cd19-416e-9759-8634523c0979-etc-machine-id\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.799407 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-config-data\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.800335 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-scripts\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.802010 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.817652 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-combined-ca-bundle\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.818358 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-db-sync-config-data\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.855713 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tzpd\" (UniqueName: \"kubernetes.io/projected/caf441af-cd19-416e-9759-8634523c0979-kube-api-access-4tzpd\") pod \"cinder-db-sync-h6d7m\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") " pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.888878 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64852c10-aeb0-424b-a601-0b46718c0fc7-combined-ca-bundle\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.888960 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64852c10-aeb0-424b-a601-0b46718c0fc7-logs\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.889529 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/64852c10-aeb0-424b-a601-0b46718c0fc7-horizon-tls-certs\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.889601 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64852c10-aeb0-424b-a601-0b46718c0fc7-horizon-secret-key\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.889638 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/64852c10-aeb0-424b-a601-0b46718c0fc7-config-data\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.889723 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64852c10-aeb0-424b-a601-0b46718c0fc7-scripts\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.889910 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26hnl\" (UniqueName: \"kubernetes.io/projected/64852c10-aeb0-424b-a601-0b46718c0fc7-kube-api-access-26hnl\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.892645 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64852c10-aeb0-424b-a601-0b46718c0fc7-logs\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.893133 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64852c10-aeb0-424b-a601-0b46718c0fc7-scripts\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.893421 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64852c10-aeb0-424b-a601-0b46718c0fc7-config-data\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.893513 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.897781 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64852c10-aeb0-424b-a601-0b46718c0fc7-combined-ca-bundle\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.903573 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/64852c10-aeb0-424b-a601-0b46718c0fc7-horizon-tls-certs\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.907386 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64852c10-aeb0-424b-a601-0b46718c0fc7-horizon-secret-key\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.936296 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26hnl\" (UniqueName: \"kubernetes.io/projected/64852c10-aeb0-424b-a601-0b46718c0fc7-kube-api-access-26hnl\") pod \"horizon-6588c6d648-nnxsr\" (UID: \"64852c10-aeb0-424b-a601-0b46718c0fc7\") " pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.988253 4888 generic.go:334] "Generic (PLEG): container finished" podID="70502939-44bc-4e73-933d-22ad0fa1657d" containerID="2bddd7936dfe41981b0ae29c6b9e18894ab0b30b7ff9ce4dcaf00011c115e633" exitCode=143 Oct 06 15:17:46 crc kubenswrapper[4888]: I1006 15:17:46.995737 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70502939-44bc-4e73-933d-22ad0fa1657d","Type":"ContainerDied","Data":"2bddd7936dfe41981b0ae29c6b9e18894ab0b30b7ff9ce4dcaf00011c115e633"} Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.002049 4888 generic.go:334] "Generic (PLEG): container finished" podID="98b2c028-29f0-40d4-a468-4bfe1e51407c" containerID="b64561bff123203328b4c1331ac37c8731ab29f517398b5ad576f9f2034fcb93" exitCode=143 Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.002102 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98b2c028-29f0-40d4-a468-4bfe1e51407c","Type":"ContainerDied","Data":"b64561bff123203328b4c1331ac37c8731ab29f517398b5ad576f9f2034fcb93"} Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.006493 4888 generic.go:334] "Generic (PLEG): container finished" podID="f107030a-25fb-4025-a397-54c4e90b3a60" containerID="fe2f9dc8b1e5c7fe90cafaba595ece0e1647c9612250257839bcf166ad4860ea" exitCode=0 Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.006565 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tbcr2" event={"ID":"f107030a-25fb-4025-a397-54c4e90b3a60","Type":"ContainerDied","Data":"fe2f9dc8b1e5c7fe90cafaba595ece0e1647c9612250257839bcf166ad4860ea"} Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.054476 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jxszs"] Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.056015 4888 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-db-sync-jxszs" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.062487 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8dkk6" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.063158 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.080550 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jxszs"] Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.111324 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.201103 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvzsg\" (UniqueName: \"kubernetes.io/projected/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-kube-api-access-dvzsg\") pod \"barbican-db-sync-jxszs\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " pod="openstack/barbican-db-sync-jxszs" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.201187 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-combined-ca-bundle\") pod \"barbican-db-sync-jxszs\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " pod="openstack/barbican-db-sync-jxszs" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.201240 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-db-sync-config-data\") pod \"barbican-db-sync-jxszs\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " pod="openstack/barbican-db-sync-jxszs" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.303134 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-db-sync-config-data\") pod \"barbican-db-sync-jxszs\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " pod="openstack/barbican-db-sync-jxszs" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.303275 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvzsg\" (UniqueName: \"kubernetes.io/projected/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-kube-api-access-dvzsg\") pod \"barbican-db-sync-jxszs\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " pod="openstack/barbican-db-sync-jxszs" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.303311 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-combined-ca-bundle\") pod \"barbican-db-sync-jxszs\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " pod="openstack/barbican-db-sync-jxszs" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.308206 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-db-sync-config-data\") pod \"barbican-db-sync-jxszs\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " pod="openstack/barbican-db-sync-jxszs" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 
15:17:47.309463 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-combined-ca-bundle\") pod \"barbican-db-sync-jxszs\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " pod="openstack/barbican-db-sync-jxszs" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.322368 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvzsg\" (UniqueName: \"kubernetes.io/projected/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-kube-api-access-dvzsg\") pod \"barbican-db-sync-jxszs\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " pod="openstack/barbican-db-sync-jxszs" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.375702 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jxszs" Oct 06 15:17:47 crc kubenswrapper[4888]: I1006 15:17:47.664391 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tbcr2" podUID="f107030a-25fb-4025-a397-54c4e90b3a60" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Oct 06 15:17:48 crc kubenswrapper[4888]: I1006 15:17:48.026925 4888 generic.go:334] "Generic (PLEG): container finished" podID="70502939-44bc-4e73-933d-22ad0fa1657d" containerID="b48a1660dc35c9828de82116ca6d13f2174568fa3c0bc0119cfa9769973597b9" exitCode=0 Oct 06 15:17:48 crc kubenswrapper[4888]: I1006 15:17:48.027004 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70502939-44bc-4e73-933d-22ad0fa1657d","Type":"ContainerDied","Data":"b48a1660dc35c9828de82116ca6d13f2174568fa3c0bc0119cfa9769973597b9"} Oct 06 15:17:48 crc kubenswrapper[4888]: I1006 15:17:48.031204 4888 generic.go:334] "Generic (PLEG): container finished" podID="98b2c028-29f0-40d4-a468-4bfe1e51407c" containerID="c046024e81f883856f34a082aaaafea9a787ec2cd4fbf2411d563df7d4247359" exitCode=0 Oct 06 15:17:48 crc kubenswrapper[4888]: I1006 15:17:48.031249 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98b2c028-29f0-40d4-a468-4bfe1e51407c","Type":"ContainerDied","Data":"c046024e81f883856f34a082aaaafea9a787ec2cd4fbf2411d563df7d4247359"} Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.222323 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.282090 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhxlg\" (UniqueName: \"kubernetes.io/projected/35407e40-4640-4155-9fef-46cea3f2dee8-kube-api-access-dhxlg\") pod \"35407e40-4640-4155-9fef-46cea3f2dee8\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.282138 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-credential-keys\") pod \"35407e40-4640-4155-9fef-46cea3f2dee8\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.282349 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-config-data\") pod \"35407e40-4640-4155-9fef-46cea3f2dee8\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.283045 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-fernet-keys\") pod \"35407e40-4640-4155-9fef-46cea3f2dee8\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.283127 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-scripts\") pod \"35407e40-4640-4155-9fef-46cea3f2dee8\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.283498 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-combined-ca-bundle\") pod \"35407e40-4640-4155-9fef-46cea3f2dee8\" (UID: \"35407e40-4640-4155-9fef-46cea3f2dee8\") " Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.287882 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "35407e40-4640-4155-9fef-46cea3f2dee8" (UID: "35407e40-4640-4155-9fef-46cea3f2dee8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.291929 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-scripts" (OuterVolumeSpecName: "scripts") pod "35407e40-4640-4155-9fef-46cea3f2dee8" (UID: "35407e40-4640-4155-9fef-46cea3f2dee8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.293324 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "35407e40-4640-4155-9fef-46cea3f2dee8" (UID: "35407e40-4640-4155-9fef-46cea3f2dee8"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.294660 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35407e40-4640-4155-9fef-46cea3f2dee8-kube-api-access-dhxlg" (OuterVolumeSpecName: "kube-api-access-dhxlg") pod "35407e40-4640-4155-9fef-46cea3f2dee8" (UID: "35407e40-4640-4155-9fef-46cea3f2dee8"). InnerVolumeSpecName "kube-api-access-dhxlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.312836 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-config-data" (OuterVolumeSpecName: "config-data") pod "35407e40-4640-4155-9fef-46cea3f2dee8" (UID: "35407e40-4640-4155-9fef-46cea3f2dee8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.313170 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35407e40-4640-4155-9fef-46cea3f2dee8" (UID: "35407e40-4640-4155-9fef-46cea3f2dee8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.386688 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhxlg\" (UniqueName: \"kubernetes.io/projected/35407e40-4640-4155-9fef-46cea3f2dee8-kube-api-access-dhxlg\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.386725 4888 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.386739 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.386750 4888 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.386760 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:51 crc kubenswrapper[4888]: I1006 15:17:51.386770 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35407e40-4640-4155-9fef-46cea3f2dee8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.070642 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9p29" event={"ID":"35407e40-4640-4155-9fef-46cea3f2dee8","Type":"ContainerDied","Data":"a122a10737b9e58acaf39e364356c66f1f2c25550597d5de59694a06341297ad"} Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.070695 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a122a10737b9e58acaf39e364356c66f1f2c25550597d5de59694a06341297ad" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 
15:17:52.071147 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h9p29" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.299512 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-h9p29"] Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.307533 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-h9p29"] Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.404113 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-knz4q"] Oct 06 15:17:52 crc kubenswrapper[4888]: E1006 15:17:52.404570 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35407e40-4640-4155-9fef-46cea3f2dee8" containerName="keystone-bootstrap" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.404603 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="35407e40-4640-4155-9fef-46cea3f2dee8" containerName="keystone-bootstrap" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.404759 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="35407e40-4640-4155-9fef-46cea3f2dee8" containerName="keystone-bootstrap" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.405345 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.407226 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pj4xj" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.409427 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.409450 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.410567 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.426039 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-knz4q"] Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.512919 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-credential-keys\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.512971 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-combined-ca-bundle\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.513006 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-config-data\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.513054 4888 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85r56\" (UniqueName: \"kubernetes.io/projected/df1ea537-b6a4-49dc-b215-a6b65ed08933-kube-api-access-85r56\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.513068 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-fernet-keys\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.513096 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-scripts\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.614937 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-credential-keys\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.614990 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-combined-ca-bundle\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.615022 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-config-data\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.615117 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85r56\" (UniqueName: \"kubernetes.io/projected/df1ea537-b6a4-49dc-b215-a6b65ed08933-kube-api-access-85r56\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.615138 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-fernet-keys\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.615168 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-scripts\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.619920 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-scripts\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.619995 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-credential-keys\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.620735 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-fernet-keys\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.621921 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-config-data\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.626999 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-combined-ca-bundle\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.631490 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85r56\" (UniqueName: \"kubernetes.io/projected/df1ea537-b6a4-49dc-b215-a6b65ed08933-kube-api-access-85r56\") pod \"keystone-bootstrap-knz4q\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.664648 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-tbcr2" podUID="f107030a-25fb-4025-a397-54c4e90b3a60" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.722104 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:17:52 crc kubenswrapper[4888]: E1006 15:17:52.922523 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 06 15:17:52 crc kubenswrapper[4888]: E1006 15:17:52.922690 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc5h54bh569h698hf5h55bhd6h5c8h686h96h5bfh584h6dh68h5ffhbfh585h646h5fdhb5h56dh654h4h58dh5fh549hdfh655hd5h675h58bh66dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljtx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7dd84dc557-97qj7_openstack(2fc1c76f-dc92-49a3-a5fa-9537a814eb82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 15:17:52 crc kubenswrapper[4888]: E1006 15:17:52.927374 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7dd84dc557-97qj7" podUID="2fc1c76f-dc92-49a3-a5fa-9537a814eb82" Oct 06 15:17:52 crc kubenswrapper[4888]: I1006 15:17:52.940624 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35407e40-4640-4155-9fef-46cea3f2dee8" path="/var/lib/kubelet/pods/35407e40-4640-4155-9fef-46cea3f2dee8/volumes" Oct 06 15:17:53 crc kubenswrapper[4888]: I1006 15:17:53.066723 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e5a4-account-create-hf6jn" Oct 06 15:17:53 crc kubenswrapper[4888]: I1006 15:17:53.109192 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e5a4-account-create-hf6jn" Oct 06 15:17:53 crc kubenswrapper[4888]: I1006 15:17:53.109583 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e5a4-account-create-hf6jn" event={"ID":"33e3ab89-3ae3-42ef-b94c-fdc2c205e105","Type":"ContainerDied","Data":"8bede56a46753ef009006afab28399d0cb679a32e8c56ac4b8ea0ad327a2861d"} Oct 06 15:17:53 crc kubenswrapper[4888]: I1006 15:17:53.109606 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bede56a46753ef009006afab28399d0cb679a32e8c56ac4b8ea0ad327a2861d" Oct 06 15:17:53 crc kubenswrapper[4888]: I1006 15:17:53.132010 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9k89\" (UniqueName: \"kubernetes.io/projected/33e3ab89-3ae3-42ef-b94c-fdc2c205e105-kube-api-access-x9k89\") pod \"33e3ab89-3ae3-42ef-b94c-fdc2c205e105\" (UID: \"33e3ab89-3ae3-42ef-b94c-fdc2c205e105\") " Oct 06 15:17:53 crc kubenswrapper[4888]: I1006 15:17:53.164370 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e3ab89-3ae3-42ef-b94c-fdc2c205e105-kube-api-access-x9k89" (OuterVolumeSpecName: "kube-api-access-x9k89") pod "33e3ab89-3ae3-42ef-b94c-fdc2c205e105" (UID: "33e3ab89-3ae3-42ef-b94c-fdc2c205e105"). InnerVolumeSpecName "kube-api-access-x9k89". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:53 crc kubenswrapper[4888]: I1006 15:17:53.245890 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9k89\" (UniqueName: \"kubernetes.io/projected/33e3ab89-3ae3-42ef-b94c-fdc2c205e105-kube-api-access-x9k89\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:53 crc kubenswrapper[4888]: E1006 15:17:53.711255 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Oct 06 15:17:53 crc kubenswrapper[4888]: E1006 15:17:53.711422 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599hc6h57hd5h68ch5bfh645h678hc5h564h58bh666h5dh5d8h5d7h584h555h576hd8h55bh54dh9h5f4hdh69h5fchf6h549h677h4h5c9h86q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmgvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2ceb4186-79b8-4dc6-b54c-7e0681764d35): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.118673 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-tbcr2" event={"ID":"f107030a-25fb-4025-a397-54c4e90b3a60","Type":"ContainerDied","Data":"a4063be1f7b171feab37d52bcce0eeeccaabd27d4c81bb789d8f6bac1452401a"} Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.119312 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4063be1f7b171feab37d52bcce0eeeccaabd27d4c81bb789d8f6bac1452401a" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.132708 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"70502939-44bc-4e73-933d-22ad0fa1657d","Type":"ContainerDied","Data":"15cc028cff71e032e8ed21f5f6b6ff0ffe04adf25cd45715e60b1580f50bddc3"} Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.132739 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15cc028cff71e032e8ed21f5f6b6ff0ffe04adf25cd45715e60b1580f50bddc3" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.133548 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.134943 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"98b2c028-29f0-40d4-a468-4bfe1e51407c","Type":"ContainerDied","Data":"56731ffcd074ffda62717da64e38e430c5dbdaa593cd146e1c881d7d434c7bd8"} Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.134991 4888 scope.go:117] "RemoveContainer" containerID="c046024e81f883856f34a082aaaafea9a787ec2cd4fbf2411d563df7d4247359" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.136628 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7dd84dc557-97qj7" event={"ID":"2fc1c76f-dc92-49a3-a5fa-9537a814eb82","Type":"ContainerDied","Data":"48b66a5c1f11f5dbeacbaff65b1365991c856a0663c7f4df633c2677fed23a36"} Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.136652 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48b66a5c1f11f5dbeacbaff65b1365991c856a0663c7f4df633c2677fed23a36" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.177053 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.177404 4888 scope.go:117] "RemoveContainer" containerID="b64561bff123203328b4c1331ac37c8731ab29f517398b5ad576f9f2034fcb93" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.210761 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.211993 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.265924 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljtx5\" (UniqueName: \"kubernetes.io/projected/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-kube-api-access-ljtx5\") pod \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.265975 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-config-data\") pod \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.266030 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-logs\") pod \"98b2c028-29f0-40d4-a468-4bfe1e51407c\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.266118 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-httpd-run\") pod \"98b2c028-29f0-40d4-a468-4bfe1e51407c\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.266166 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-combined-ca-bundle\") pod \"98b2c028-29f0-40d4-a468-4bfe1e51407c\" (UID: 
\"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.266226 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-scripts\") pod \"98b2c028-29f0-40d4-a468-4bfe1e51407c\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.266269 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-horizon-secret-key\") pod \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.266290 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ms79\" (UniqueName: \"kubernetes.io/projected/98b2c028-29f0-40d4-a468-4bfe1e51407c-kube-api-access-6ms79\") pod \"98b2c028-29f0-40d4-a468-4bfe1e51407c\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.266320 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-logs\") pod \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.266343 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-scripts\") pod \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\" (UID: \"2fc1c76f-dc92-49a3-a5fa-9537a814eb82\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.266408 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"98b2c028-29f0-40d4-a468-4bfe1e51407c\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.266446 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-config-data\") pod \"98b2c028-29f0-40d4-a468-4bfe1e51407c\" (UID: \"98b2c028-29f0-40d4-a468-4bfe1e51407c\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.266647 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-logs" (OuterVolumeSpecName: "logs") pod "98b2c028-29f0-40d4-a468-4bfe1e51407c" (UID: "98b2c028-29f0-40d4-a468-4bfe1e51407c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.267381 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.268081 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-config-data" (OuterVolumeSpecName: "config-data") pod "2fc1c76f-dc92-49a3-a5fa-9537a814eb82" (UID: "2fc1c76f-dc92-49a3-a5fa-9537a814eb82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.268546 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-logs" (OuterVolumeSpecName: "logs") pod "2fc1c76f-dc92-49a3-a5fa-9537a814eb82" (UID: "2fc1c76f-dc92-49a3-a5fa-9537a814eb82"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.268917 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-scripts" (OuterVolumeSpecName: "scripts") pod "2fc1c76f-dc92-49a3-a5fa-9537a814eb82" (UID: "2fc1c76f-dc92-49a3-a5fa-9537a814eb82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.274172 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "98b2c028-29f0-40d4-a468-4bfe1e51407c" (UID: "98b2c028-29f0-40d4-a468-4bfe1e51407c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.275078 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-scripts" (OuterVolumeSpecName: "scripts") pod "98b2c028-29f0-40d4-a468-4bfe1e51407c" (UID: "98b2c028-29f0-40d4-a468-4bfe1e51407c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.277606 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-kube-api-access-ljtx5" (OuterVolumeSpecName: "kube-api-access-ljtx5") pod "2fc1c76f-dc92-49a3-a5fa-9537a814eb82" (UID: "2fc1c76f-dc92-49a3-a5fa-9537a814eb82"). InnerVolumeSpecName "kube-api-access-ljtx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.279304 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "98b2c028-29f0-40d4-a468-4bfe1e51407c" (UID: "98b2c028-29f0-40d4-a468-4bfe1e51407c"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.281148 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b2c028-29f0-40d4-a468-4bfe1e51407c-kube-api-access-6ms79" (OuterVolumeSpecName: "kube-api-access-6ms79") pod "98b2c028-29f0-40d4-a468-4bfe1e51407c" (UID: "98b2c028-29f0-40d4-a468-4bfe1e51407c"). InnerVolumeSpecName "kube-api-access-6ms79". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.290515 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2fc1c76f-dc92-49a3-a5fa-9537a814eb82" (UID: "2fc1c76f-dc92-49a3-a5fa-9537a814eb82"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.314060 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98b2c028-29f0-40d4-a468-4bfe1e51407c" (UID: "98b2c028-29f0-40d4-a468-4bfe1e51407c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.326707 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-h6d7m"] Oct 06 15:17:54 crc kubenswrapper[4888]: W1006 15:17:54.341940 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaf441af_cd19_416e_9759_8634523c0979.slice/crio-70ec273957b7c87730239d59de5e94672ceeb9cd833e59ab5f4023495ca2d6af WatchSource:0}: Error finding container 70ec273957b7c87730239d59de5e94672ceeb9cd833e59ab5f4023495ca2d6af: Status 404 returned error can't find the container with id 70ec273957b7c87730239d59de5e94672ceeb9cd833e59ab5f4023495ca2d6af Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.369504 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-nb\") pod \"f107030a-25fb-4025-a397-54c4e90b3a60\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.369541 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmhzs\" (UniqueName: \"kubernetes.io/projected/f107030a-25fb-4025-a397-54c4e90b3a60-kube-api-access-bmhzs\") pod \"f107030a-25fb-4025-a397-54c4e90b3a60\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.369564 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-combined-ca-bundle\") pod \"70502939-44bc-4e73-933d-22ad0fa1657d\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.369661 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-dns-svc\") pod \"f107030a-25fb-4025-a397-54c4e90b3a60\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.369683 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-config-data\") pod \"70502939-44bc-4e73-933d-22ad0fa1657d\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.369708 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh7qd\" (UniqueName: \"kubernetes.io/projected/70502939-44bc-4e73-933d-22ad0fa1657d-kube-api-access-kh7qd\") pod \"70502939-44bc-4e73-933d-22ad0fa1657d\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.369731 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-config\") pod \"f107030a-25fb-4025-a397-54c4e90b3a60\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.369778 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"70502939-44bc-4e73-933d-22ad0fa1657d\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.369899 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-sb\") pod \"f107030a-25fb-4025-a397-54c4e90b3a60\" (UID: \"f107030a-25fb-4025-a397-54c4e90b3a60\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.369968 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-httpd-run\") pod \"70502939-44bc-4e73-933d-22ad0fa1657d\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.369984 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-scripts\") pod \"70502939-44bc-4e73-933d-22ad0fa1657d\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370036 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-logs\") pod \"70502939-44bc-4e73-933d-22ad0fa1657d\" (UID: \"70502939-44bc-4e73-933d-22ad0fa1657d\") " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370427 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljtx5\" (UniqueName: \"kubernetes.io/projected/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-kube-api-access-ljtx5\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370441 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370449 4888 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/98b2c028-29f0-40d4-a468-4bfe1e51407c-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370459 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370467 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370475 4888 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370483 4888 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-6ms79\" (UniqueName: \"kubernetes.io/projected/98b2c028-29f0-40d4-a468-4bfe1e51407c-kube-api-access-6ms79\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370491 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370499 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2fc1c76f-dc92-49a3-a5fa-9537a814eb82-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370516 4888 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.370428 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "70502939-44bc-4e73-933d-22ad0fa1657d" (UID: "70502939-44bc-4e73-933d-22ad0fa1657d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.373721 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-logs" (OuterVolumeSpecName: "logs") pod "70502939-44bc-4e73-933d-22ad0fa1657d" (UID: "70502939-44bc-4e73-933d-22ad0fa1657d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.376591 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "70502939-44bc-4e73-933d-22ad0fa1657d" (UID: "70502939-44bc-4e73-933d-22ad0fa1657d"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.377276 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f107030a-25fb-4025-a397-54c4e90b3a60-kube-api-access-bmhzs" (OuterVolumeSpecName: "kube-api-access-bmhzs") pod "f107030a-25fb-4025-a397-54c4e90b3a60" (UID: "f107030a-25fb-4025-a397-54c4e90b3a60"). InnerVolumeSpecName "kube-api-access-bmhzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.382218 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-knz4q"] Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.384668 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70502939-44bc-4e73-933d-22ad0fa1657d-kube-api-access-kh7qd" (OuterVolumeSpecName: "kube-api-access-kh7qd") pod "70502939-44bc-4e73-933d-22ad0fa1657d" (UID: "70502939-44bc-4e73-933d-22ad0fa1657d"). InnerVolumeSpecName "kube-api-access-kh7qd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.397307 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-scripts" (OuterVolumeSpecName: "scripts") pod "70502939-44bc-4e73-933d-22ad0fa1657d" (UID: "70502939-44bc-4e73-933d-22ad0fa1657d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.412971 4888 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.473127 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jxszs"] Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.476027 4888 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.476062 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.476074 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70502939-44bc-4e73-933d-22ad0fa1657d-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.476084 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmhzs\" (UniqueName: \"kubernetes.io/projected/f107030a-25fb-4025-a397-54c4e90b3a60-kube-api-access-bmhzs\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.476098 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh7qd\" (UniqueName: \"kubernetes.io/projected/70502939-44bc-4e73-933d-22ad0fa1657d-kube-api-access-kh7qd\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.476109 4888 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.476136 4888 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.486736 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6588c6d648-nnxsr"] Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.495835 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ffc855b96-nhf9w"] Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.504291 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-config-data" (OuterVolumeSpecName: "config-data") pod "98b2c028-29f0-40d4-a468-4bfe1e51407c" (UID: "98b2c028-29f0-40d4-a468-4bfe1e51407c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.513063 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70502939-44bc-4e73-933d-22ad0fa1657d" (UID: "70502939-44bc-4e73-933d-22ad0fa1657d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: W1006 15:17:54.517221 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0574c745_cac5_4deb_87cc_a04c1b09aa9a.slice/crio-96a798007c9526c3fbc0c203fd8cab7b28b649f2e352cd7b53a65107bd1e47a3 WatchSource:0}: Error finding container 96a798007c9526c3fbc0c203fd8cab7b28b649f2e352cd7b53a65107bd1e47a3: Status 404 returned error can't find the container with id 96a798007c9526c3fbc0c203fd8cab7b28b649f2e352cd7b53a65107bd1e47a3 Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.522664 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f107030a-25fb-4025-a397-54c4e90b3a60" (UID: "f107030a-25fb-4025-a397-54c4e90b3a60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.531130 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f107030a-25fb-4025-a397-54c4e90b3a60" (UID: "f107030a-25fb-4025-a397-54c4e90b3a60"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.566461 4888 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.580005 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b2c028-29f0-40d4-a468-4bfe1e51407c-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.580047 4888 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.580056 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.580065 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.580073 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.677230 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-config-data" (OuterVolumeSpecName: "config-data") pod "70502939-44bc-4e73-933d-22ad0fa1657d" (UID: "70502939-44bc-4e73-933d-22ad0fa1657d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.681682 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70502939-44bc-4e73-933d-22ad0fa1657d-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.699275 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-config" (OuterVolumeSpecName: "config") pod "f107030a-25fb-4025-a397-54c4e90b3a60" (UID: "f107030a-25fb-4025-a397-54c4e90b3a60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.711122 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f107030a-25fb-4025-a397-54c4e90b3a60" (UID: "f107030a-25fb-4025-a397-54c4e90b3a60"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.783366 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:54 crc kubenswrapper[4888]: I1006 15:17:54.783399 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f107030a-25fb-4025-a397-54c4e90b3a60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.148588 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5rgjm" event={"ID":"840197d6-f6a9-4bfc-9e0b-74328e475532","Type":"ContainerStarted","Data":"0a59a477c2f35050fcedc97ec00c7401faa46c57445db59a68671be281e62d11"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.151176 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.160924 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h6d7m" event={"ID":"caf441af-cd19-416e-9759-8634523c0979","Type":"ContainerStarted","Data":"70ec273957b7c87730239d59de5e94672ceeb9cd833e59ab5f4023495ca2d6af"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.165932 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-5rgjm" podStartSLOduration=4.034000665 podStartE2EDuration="23.165915582s" podCreationTimestamp="2025-10-06 15:17:32 +0000 UTC" firstStartedPulling="2025-10-06 15:17:34.58313343 +0000 UTC m=+994.395484148" lastFinishedPulling="2025-10-06 15:17:53.715048347 +0000 UTC m=+1013.527399065" observedRunningTime="2025-10-06 15:17:55.165442218 +0000 UTC m=+1014.977792926" watchObservedRunningTime="2025-10-06 15:17:55.165915582 +0000 UTC m=+1014.978266300" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.168980 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8547c88cc-gnkgd" event={"ID":"e1dd4b83-bf5d-4198-b411-97a5a8d057d6","Type":"ContainerStarted","Data":"e0619f8a94676b6111c8e109feacf5357a60bb004e13e97a5a71f16683579f33"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.169027 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8547c88cc-gnkgd" event={"ID":"e1dd4b83-bf5d-4198-b411-97a5a8d057d6","Type":"ContainerStarted","Data":"b1762979c77cf1f37fa7ad043379403decb7aa8de51b45ef57d197065bde8d52"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.169202 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8547c88cc-gnkgd" podUID="e1dd4b83-bf5d-4198-b411-97a5a8d057d6" containerName="horizon-log" containerID="cri-o://b1762979c77cf1f37fa7ad043379403decb7aa8de51b45ef57d197065bde8d52" gracePeriod=30 Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.169262 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8547c88cc-gnkgd" podUID="e1dd4b83-bf5d-4198-b411-97a5a8d057d6" containerName="horizon" containerID="cri-o://e0619f8a94676b6111c8e109feacf5357a60bb004e13e97a5a71f16683579f33" gracePeriod=30 Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.182193 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jxszs" 
event={"ID":"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0","Type":"ContainerStarted","Data":"06f2653edf2e174ce457c0f975a8a4e4c21ce25cb78a5494083b9ef86cc52aa6"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.193689 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.210807 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.231491 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6588c6d648-nnxsr" event={"ID":"64852c10-aeb0-424b-a601-0b46718c0fc7","Type":"ContainerStarted","Data":"027edb60b8e445f7bccc4c8ba20fd65ee8975caa895b177f01f8111eb03b2b1b"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.231538 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6588c6d648-nnxsr" event={"ID":"64852c10-aeb0-424b-a601-0b46718c0fc7","Type":"ContainerStarted","Data":"81070b8c2d7301a1cd1806ecf3db661aa0d4d83bb7c10c6d4b458209f0bfc0fe"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.234178 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:17:55 crc kubenswrapper[4888]: E1006 15:17:55.234674 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b2c028-29f0-40d4-a468-4bfe1e51407c" containerName="glance-httpd" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.234720 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b2c028-29f0-40d4-a468-4bfe1e51407c" containerName="glance-httpd" Oct 06 15:17:55 crc kubenswrapper[4888]: E1006 15:17:55.234744 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e3ab89-3ae3-42ef-b94c-fdc2c205e105" containerName="mariadb-account-create" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.234753 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e3ab89-3ae3-42ef-b94c-fdc2c205e105" containerName="mariadb-account-create" Oct 06 15:17:55 crc kubenswrapper[4888]: E1006 15:17:55.234767 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70502939-44bc-4e73-933d-22ad0fa1657d" containerName="glance-log" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.234777 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="70502939-44bc-4e73-933d-22ad0fa1657d" containerName="glance-log" Oct 06 15:17:55 crc kubenswrapper[4888]: E1006 15:17:55.234862 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f107030a-25fb-4025-a397-54c4e90b3a60" containerName="dnsmasq-dns" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.234870 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="f107030a-25fb-4025-a397-54c4e90b3a60" containerName="dnsmasq-dns" Oct 06 15:17:55 crc kubenswrapper[4888]: E1006 15:17:55.234882 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70502939-44bc-4e73-933d-22ad0fa1657d" containerName="glance-httpd" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.234887 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="70502939-44bc-4e73-933d-22ad0fa1657d" containerName="glance-httpd" Oct 06 15:17:55 crc kubenswrapper[4888]: E1006 15:17:55.234901 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f107030a-25fb-4025-a397-54c4e90b3a60" containerName="init" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.234907 4888 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f107030a-25fb-4025-a397-54c4e90b3a60" containerName="init" Oct 06 15:17:55 crc kubenswrapper[4888]: E1006 15:17:55.234918 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b2c028-29f0-40d4-a468-4bfe1e51407c" containerName="glance-log" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.234924 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b2c028-29f0-40d4-a468-4bfe1e51407c" containerName="glance-log" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.235093 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b2c028-29f0-40d4-a468-4bfe1e51407c" containerName="glance-log" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.235108 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e3ab89-3ae3-42ef-b94c-fdc2c205e105" containerName="mariadb-account-create" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.235123 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="70502939-44bc-4e73-933d-22ad0fa1657d" containerName="glance-log" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.235134 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="70502939-44bc-4e73-933d-22ad0fa1657d" containerName="glance-httpd" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.235142 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="f107030a-25fb-4025-a397-54c4e90b3a60" containerName="dnsmasq-dns" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.235152 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b2c028-29f0-40d4-a468-4bfe1e51407c" containerName="glance-httpd" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.236123 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-knz4q" event={"ID":"df1ea537-b6a4-49dc-b215-a6b65ed08933","Type":"ContainerStarted","Data":"dba918561bae064cf2c006b20fab52b6d9ff00dbadd6eab14808d524d3b27987"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.236151 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-knz4q" event={"ID":"df1ea537-b6a4-49dc-b215-a6b65ed08933","Type":"ContainerStarted","Data":"14a3ad3b804e7ff42f43409eaf91ae72efb0be629c7dcafc413bec699f682f43"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.236216 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.239242 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.239290 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.240177 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67678875c7-wxfg8" podUID="a088206b-bb6d-455d-b223-689888a75b1c" containerName="horizon-log" containerID="cri-o://edfc78ea49fc0a966637f77fa374d172d46360082daa30f145a38d5cbaf094c8" gracePeriod=30 Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.240409 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67678875c7-wxfg8" event={"ID":"a088206b-bb6d-455d-b223-689888a75b1c","Type":"ContainerStarted","Data":"f1e0de4d69381cc8e31ccfc576a282f1ca869f8c025dc04bc66480d6c40f1791"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.240526 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67678875c7-wxfg8" event={"ID":"a088206b-bb6d-455d-b223-689888a75b1c","Type":"ContainerStarted","Data":"edfc78ea49fc0a966637f77fa374d172d46360082daa30f145a38d5cbaf094c8"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.240656 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67678875c7-wxfg8" podUID="a088206b-bb6d-455d-b223-689888a75b1c" containerName="horizon" containerID="cri-o://f1e0de4d69381cc8e31ccfc576a282f1ca869f8c025dc04bc66480d6c40f1791" gracePeriod=30 Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.242213 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8547c88cc-gnkgd" podStartSLOduration=4.126286606 podStartE2EDuration="20.242195944s" podCreationTimestamp="2025-10-06 15:17:35 +0000 UTC" firstStartedPulling="2025-10-06 15:17:37.60522651 +0000 UTC m=+997.417577228" lastFinishedPulling="2025-10-06 15:17:53.721135848 +0000 UTC m=+1013.533486566" observedRunningTime="2025-10-06 15:17:55.231348932 +0000 UTC m=+1015.043699660" watchObservedRunningTime="2025-10-06 15:17:55.242195944 +0000 UTC m=+1015.054546662" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.257395 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.259260 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-knz4q" podStartSLOduration=3.259248561 podStartE2EDuration="3.259248561s" podCreationTimestamp="2025-10-06 15:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:17:55.257354792 +0000 UTC m=+1015.069705510" watchObservedRunningTime="2025-10-06 15:17:55.259248561 +0000 UTC m=+1015.071599279" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.266492 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.266624 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ffc855b96-nhf9w" event={"ID":"0574c745-cac5-4deb-87cc-a04c1b09aa9a","Type":"ContainerStarted","Data":"2df7a1bb23460b3296677c8de731421c3f3ab7fd0970a63e26450d729add3c3c"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.266660 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ffc855b96-nhf9w" event={"ID":"0574c745-cac5-4deb-87cc-a04c1b09aa9a","Type":"ContainerStarted","Data":"96a798007c9526c3fbc0c203fd8cab7b28b649f2e352cd7b53a65107bd1e47a3"} Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.266764 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-tbcr2" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.267196 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7dd84dc557-97qj7" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.309236 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md4hk\" (UniqueName: \"kubernetes.io/projected/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-kube-api-access-md4hk\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.309279 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-config-data\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.309318 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.309363 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-logs\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.309424 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.309469 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 
15:17:55.309487 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-scripts\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.309577 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.322602 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-67678875c7-wxfg8" podStartSLOduration=3.7678399799999998 podStartE2EDuration="23.322586496s" podCreationTimestamp="2025-10-06 15:17:32 +0000 UTC" firstStartedPulling="2025-10-06 15:17:34.235623904 +0000 UTC m=+994.047974622" lastFinishedPulling="2025-10-06 15:17:53.79037042 +0000 UTC m=+1013.602721138" observedRunningTime="2025-10-06 15:17:55.308154582 +0000 UTC m=+1015.120505310" watchObservedRunningTime="2025-10-06 15:17:55.322586496 +0000 UTC m=+1015.134937214" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.327838 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tbcr2"] Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.344089 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-tbcr2"] Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.391756 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.412423 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.413780 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md4hk\" (UniqueName: \"kubernetes.io/projected/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-kube-api-access-md4hk\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.413931 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-config-data\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.414034 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.414140 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-logs\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " 
pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.414255 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.414398 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.414475 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-scripts\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.414637 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.415141 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.422021 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.423244 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-logs\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.439305 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.441064 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.443992 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.444282 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.447590 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.455009 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-config-data\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.461281 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-scripts\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.473079 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md4hk\" (UniqueName: \"kubernetes.io/projected/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-kube-api-access-md4hk\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.474236 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.480081 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7dd84dc557-97qj7"] Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.531988 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7dd84dc557-97qj7"] Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.543873 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.544095 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.544191 4888 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.544398 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-logs\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.544579 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.544679 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj8jh\" (UniqueName: \"kubernetes.io/projected/78d84d66-314f-4338-ab86-206b0db9f5b9-kube-api-access-fj8jh\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.544890 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.545111 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.615690 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.649371 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " pod="openstack/glance-default-external-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.721020 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.722361 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") 
" pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.722488 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.722643 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.722809 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-logs\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.722878 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.724198 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-logs\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.724390 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.724991 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj8jh\" (UniqueName: \"kubernetes.io/projected/78d84d66-314f-4338-ab86-206b0db9f5b9-kube-api-access-fj8jh\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.725142 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.725620 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: 
I1006 15:17:55.735626 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.741233 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.744396 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.748419 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.754719 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj8jh\" (UniqueName: \"kubernetes.io/projected/78d84d66-314f-4338-ab86-206b0db9f5b9-kube-api-access-fj8jh\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.828466 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.846861 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 15:17:55 crc kubenswrapper[4888]: I1006 15:17:55.868046 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.202687 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8547c88cc-gnkgd" Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.304774 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6588c6d648-nnxsr" event={"ID":"64852c10-aeb0-424b-a601-0b46718c0fc7","Type":"ContainerStarted","Data":"f2496e188292500e18f000d814226014e2e7959fbeabffa8c08913a31b5efbb9"} Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.339211 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ffc855b96-nhf9w" event={"ID":"0574c745-cac5-4deb-87cc-a04c1b09aa9a","Type":"ContainerStarted","Data":"81d5e6ec06435d3284bb89303c374e53eb6fe1e13976a3a09a106e21eef6b6f4"} Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.360670 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6588c6d648-nnxsr" podStartSLOduration=10.36064933 podStartE2EDuration="10.36064933s" podCreationTimestamp="2025-10-06 15:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:17:56.333386122 +0000 UTC m=+1016.145736860" watchObservedRunningTime="2025-10-06 15:17:56.36064933 +0000 UTC m=+1016.173000048" Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.385474 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-ffc855b96-nhf9w" podStartSLOduration=10.38544241 podStartE2EDuration="10.38544241s" podCreationTimestamp="2025-10-06 15:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:17:56.378003206 +0000 UTC m=+1016.190353924" watchObservedRunningTime="2025-10-06 15:17:56.38544241 +0000 UTC m=+1016.197793128" Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.712963 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.803018 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.803068 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.837700 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.965019 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc1c76f-dc92-49a3-a5fa-9537a814eb82" path="/var/lib/kubelet/pods/2fc1c76f-dc92-49a3-a5fa-9537a814eb82/volumes" Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.965466 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70502939-44bc-4e73-933d-22ad0fa1657d" path="/var/lib/kubelet/pods/70502939-44bc-4e73-933d-22ad0fa1657d/volumes" Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.966187 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b2c028-29f0-40d4-a468-4bfe1e51407c" path="/var/lib/kubelet/pods/98b2c028-29f0-40d4-a468-4bfe1e51407c/volumes" Oct 06 15:17:56 crc kubenswrapper[4888]: I1006 15:17:56.967289 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f107030a-25fb-4025-a397-54c4e90b3a60" path="/var/lib/kubelet/pods/f107030a-25fb-4025-a397-54c4e90b3a60/volumes" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.119685 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.119734 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6588c6d648-nnxsr" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.250233 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-j2tl6"] Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.261559 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j2tl6" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.265514 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-j2tl6"] Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.274386 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pbrvh" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.274894 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.275155 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.356434 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5","Type":"ContainerStarted","Data":"ebfc2c8564d0aff3b6ee2867e1ad4dcb73d31985b149891e4f2c3ce8f2b9c27f"} Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.358929 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ceb4186-79b8-4dc6-b54c-7e0681764d35","Type":"ContainerStarted","Data":"e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24"} Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.361341 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78d84d66-314f-4338-ab86-206b0db9f5b9","Type":"ContainerStarted","Data":"d178010b587b91344ea46d407a450b8acce5d39b088aa61ca49793985a544144"} Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.367874 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-combined-ca-bundle\") pod \"neutron-db-sync-j2tl6\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") " pod="openstack/neutron-db-sync-j2tl6" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.367938 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbd2l\" (UniqueName: \"kubernetes.io/projected/1e36f3be-2b0f-45e6-8275-b66240419057-kube-api-access-wbd2l\") pod \"neutron-db-sync-j2tl6\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") " pod="openstack/neutron-db-sync-j2tl6" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.367999 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-config\") pod \"neutron-db-sync-j2tl6\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") " 
pod="openstack/neutron-db-sync-j2tl6" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.470538 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-config\") pod \"neutron-db-sync-j2tl6\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") " pod="openstack/neutron-db-sync-j2tl6" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.470705 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-combined-ca-bundle\") pod \"neutron-db-sync-j2tl6\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") " pod="openstack/neutron-db-sync-j2tl6" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.470823 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbd2l\" (UniqueName: \"kubernetes.io/projected/1e36f3be-2b0f-45e6-8275-b66240419057-kube-api-access-wbd2l\") pod \"neutron-db-sync-j2tl6\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") " pod="openstack/neutron-db-sync-j2tl6" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.480053 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-config\") pod \"neutron-db-sync-j2tl6\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") " pod="openstack/neutron-db-sync-j2tl6" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.489066 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-combined-ca-bundle\") pod \"neutron-db-sync-j2tl6\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") " pod="openstack/neutron-db-sync-j2tl6" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.511825 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbd2l\" (UniqueName: \"kubernetes.io/projected/1e36f3be-2b0f-45e6-8275-b66240419057-kube-api-access-wbd2l\") pod \"neutron-db-sync-j2tl6\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") " pod="openstack/neutron-db-sync-j2tl6" Oct 06 15:17:57 crc kubenswrapper[4888]: I1006 15:17:57.618425 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-j2tl6" Oct 06 15:17:58 crc kubenswrapper[4888]: I1006 15:17:58.380649 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78d84d66-314f-4338-ab86-206b0db9f5b9","Type":"ContainerStarted","Data":"a65899bb3278b329aebbafd8e058151b8de4c4f5b2a405841f698cbcd9a6c1b2"} Oct 06 15:17:58 crc kubenswrapper[4888]: I1006 15:17:58.505519 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-j2tl6"] Oct 06 15:17:59 crc kubenswrapper[4888]: I1006 15:17:59.417416 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j2tl6" event={"ID":"1e36f3be-2b0f-45e6-8275-b66240419057","Type":"ContainerStarted","Data":"ff8ada8b8499445176bf9503035a43b07c062e894040a4970355235dccf6cd98"} Oct 06 15:17:59 crc kubenswrapper[4888]: I1006 15:17:59.428531 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5","Type":"ContainerStarted","Data":"ca40c156cdbef7ea93804d2c29173089b45f33f039e87e8028f32b33b004ac5e"} Oct 06 15:18:00 crc kubenswrapper[4888]: I1006 15:18:00.444015 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5","Type":"ContainerStarted","Data":"8f23fa36485dcf01f41482fad1ff980fdbf482b390b2ff18760b42fafd1ee635"} Oct 06 15:18:00 crc kubenswrapper[4888]: I1006 15:18:00.458147 4888 generic.go:334] "Generic (PLEG): container finished" podID="840197d6-f6a9-4bfc-9e0b-74328e475532" containerID="0a59a477c2f35050fcedc97ec00c7401faa46c57445db59a68671be281e62d11" exitCode=0 Oct 06 15:18:00 crc kubenswrapper[4888]: I1006 15:18:00.458213 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5rgjm" event={"ID":"840197d6-f6a9-4bfc-9e0b-74328e475532","Type":"ContainerDied","Data":"0a59a477c2f35050fcedc97ec00c7401faa46c57445db59a68671be281e62d11"} Oct 06 15:18:00 crc kubenswrapper[4888]: I1006 15:18:00.465135 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78d84d66-314f-4338-ab86-206b0db9f5b9","Type":"ContainerStarted","Data":"da15d24284cb0c8bc8f729e022d0e8ba2eb02f6fb6cfd2c76a663fad85f1d379"} Oct 06 15:18:00 crc kubenswrapper[4888]: I1006 15:18:00.476313 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.476300092 podStartE2EDuration="5.476300092s" podCreationTimestamp="2025-10-06 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:00.466877726 +0000 UTC m=+1020.279228464" watchObservedRunningTime="2025-10-06 15:18:00.476300092 +0000 UTC m=+1020.288650810" Oct 06 15:18:00 crc kubenswrapper[4888]: I1006 15:18:00.484317 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j2tl6" event={"ID":"1e36f3be-2b0f-45e6-8275-b66240419057","Type":"ContainerStarted","Data":"ac33590e39ab8ecff2307075d973a663542b0a3e362e88171088635fbad92126"} Oct 06 15:18:00 crc kubenswrapper[4888]: I1006 15:18:00.501873 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.501851877 podStartE2EDuration="5.501851877s" podCreationTimestamp="2025-10-06 15:17:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:00.492884354 +0000 UTC m=+1020.305235072" watchObservedRunningTime="2025-10-06 15:18:00.501851877 +0000 UTC m=+1020.314202615" Oct 06 15:18:00 crc kubenswrapper[4888]: I1006 15:18:00.959677 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-j2tl6" podStartSLOduration=3.959660445 podStartE2EDuration="3.959660445s" podCreationTimestamp="2025-10-06 15:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:00.534666771 +0000 UTC m=+1020.347017489" watchObservedRunningTime="2025-10-06 15:18:00.959660445 +0000 UTC m=+1020.772011163" Oct 06 15:18:01 crc kubenswrapper[4888]: I1006 15:18:01.944742 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5rgjm" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.049273 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/840197d6-f6a9-4bfc-9e0b-74328e475532-logs\") pod \"840197d6-f6a9-4bfc-9e0b-74328e475532\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.049338 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-combined-ca-bundle\") pod \"840197d6-f6a9-4bfc-9e0b-74328e475532\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.049403 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nhhv\" (UniqueName: \"kubernetes.io/projected/840197d6-f6a9-4bfc-9e0b-74328e475532-kube-api-access-7nhhv\") pod \"840197d6-f6a9-4bfc-9e0b-74328e475532\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.049432 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-scripts\") pod \"840197d6-f6a9-4bfc-9e0b-74328e475532\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.049490 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-config-data\") pod \"840197d6-f6a9-4bfc-9e0b-74328e475532\" (UID: \"840197d6-f6a9-4bfc-9e0b-74328e475532\") " Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.050232 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/840197d6-f6a9-4bfc-9e0b-74328e475532-logs" (OuterVolumeSpecName: "logs") pod "840197d6-f6a9-4bfc-9e0b-74328e475532" (UID: "840197d6-f6a9-4bfc-9e0b-74328e475532"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.050700 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/840197d6-f6a9-4bfc-9e0b-74328e475532-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.060899 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-scripts" (OuterVolumeSpecName: "scripts") pod "840197d6-f6a9-4bfc-9e0b-74328e475532" (UID: "840197d6-f6a9-4bfc-9e0b-74328e475532"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.076634 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840197d6-f6a9-4bfc-9e0b-74328e475532-kube-api-access-7nhhv" (OuterVolumeSpecName: "kube-api-access-7nhhv") pod "840197d6-f6a9-4bfc-9e0b-74328e475532" (UID: "840197d6-f6a9-4bfc-9e0b-74328e475532"). InnerVolumeSpecName "kube-api-access-7nhhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.103759 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-config-data" (OuterVolumeSpecName: "config-data") pod "840197d6-f6a9-4bfc-9e0b-74328e475532" (UID: "840197d6-f6a9-4bfc-9e0b-74328e475532"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.153704 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nhhv\" (UniqueName: \"kubernetes.io/projected/840197d6-f6a9-4bfc-9e0b-74328e475532-kube-api-access-7nhhv\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.153759 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.153829 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.180220 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "840197d6-f6a9-4bfc-9e0b-74328e475532" (UID: "840197d6-f6a9-4bfc-9e0b-74328e475532"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.259291 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/840197d6-f6a9-4bfc-9e0b-74328e475532-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.516576 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5rgjm" event={"ID":"840197d6-f6a9-4bfc-9e0b-74328e475532","Type":"ContainerDied","Data":"0465f400fdd39c97546318dd578fffdf79700f02bf21c679f518283d0e748963"} Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.516922 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0465f400fdd39c97546318dd578fffdf79700f02bf21c679f518283d0e748963" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.516678 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5rgjm" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.521966 4888 generic.go:334] "Generic (PLEG): container finished" podID="df1ea537-b6a4-49dc-b215-a6b65ed08933" containerID="dba918561bae064cf2c006b20fab52b6d9ff00dbadd6eab14808d524d3b27987" exitCode=0 Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.522032 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-knz4q" event={"ID":"df1ea537-b6a4-49dc-b215-a6b65ed08933","Type":"ContainerDied","Data":"dba918561bae064cf2c006b20fab52b6d9ff00dbadd6eab14808d524d3b27987"} Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.716370 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9885565f4-9j2hk"] Oct 06 15:18:02 crc kubenswrapper[4888]: E1006 15:18:02.716755 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840197d6-f6a9-4bfc-9e0b-74328e475532" containerName="placement-db-sync" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.716773 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="840197d6-f6a9-4bfc-9e0b-74328e475532" containerName="placement-db-sync" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.716977 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="840197d6-f6a9-4bfc-9e0b-74328e475532" containerName="placement-db-sync" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.718013 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.734178 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dtkhf" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.734294 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.734455 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.734599 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.735063 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.743604 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9885565f4-9j2hk"] Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.870848 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-config-data\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.870893 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-combined-ca-bundle\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.870997 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-internal-tls-certs\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.871020 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gnbk\" (UniqueName: \"kubernetes.io/projected/3fe4a962-ac46-4084-a054-4b7499863e84-kube-api-access-9gnbk\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.871069 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-public-tls-certs\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.871107 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-scripts\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.871127 
4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4a962-ac46-4084-a054-4b7499863e84-logs\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.972097 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4a962-ac46-4084-a054-4b7499863e84-logs\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.972172 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-config-data\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.972193 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-combined-ca-bundle\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.972244 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-internal-tls-certs\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.972270 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gnbk\" (UniqueName: \"kubernetes.io/projected/3fe4a962-ac46-4084-a054-4b7499863e84-kube-api-access-9gnbk\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.972317 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-public-tls-certs\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.972352 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-scripts\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.972560 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4a962-ac46-4084-a054-4b7499863e84-logs\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.982642 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-internal-tls-certs\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.982692 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-config-data\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.983600 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-scripts\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.989433 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-public-tls-certs\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:02 crc kubenswrapper[4888]: I1006 15:18:02.998374 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4a962-ac46-4084-a054-4b7499863e84-combined-ca-bundle\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:03 crc kubenswrapper[4888]: I1006 15:18:03.012232 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gnbk\" (UniqueName: \"kubernetes.io/projected/3fe4a962-ac46-4084-a054-4b7499863e84-kube-api-access-9gnbk\") pod \"placement-9885565f4-9j2hk\" (UID: \"3fe4a962-ac46-4084-a054-4b7499863e84\") " pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:03 crc kubenswrapper[4888]: I1006 15:18:03.063791 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67678875c7-wxfg8" Oct 06 15:18:03 crc kubenswrapper[4888]: I1006 15:18:03.083461 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:18:05 crc kubenswrapper[4888]: I1006 15:18:05.847786 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 15:18:05 crc kubenswrapper[4888]: I1006 15:18:05.848460 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 15:18:05 crc kubenswrapper[4888]: I1006 15:18:05.868963 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 15:18:05 crc kubenswrapper[4888]: I1006 15:18:05.869011 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 15:18:05 crc kubenswrapper[4888]: I1006 15:18:05.931023 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 15:18:05 crc kubenswrapper[4888]: I1006 15:18:05.976229 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 15:18:05 crc kubenswrapper[4888]: I1006 15:18:05.976317 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 15:18:05 crc kubenswrapper[4888]: I1006 15:18:05.991882 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 15:18:06 crc kubenswrapper[4888]: I1006 15:18:06.568634 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 15:18:06 crc kubenswrapper[4888]: I1006 15:18:06.568939 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 15:18:06 crc kubenswrapper[4888]: I1006 15:18:06.570270 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 15:18:06 crc kubenswrapper[4888]: I1006 15:18:06.570379 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 15:18:06 crc kubenswrapper[4888]: I1006 15:18:06.805286 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ffc855b96-nhf9w" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.020075 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.065468 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-scripts\") pod \"df1ea537-b6a4-49dc-b215-a6b65ed08933\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.065786 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-credential-keys\") pod \"df1ea537-b6a4-49dc-b215-a6b65ed08933\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.065860 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-config-data\") pod \"df1ea537-b6a4-49dc-b215-a6b65ed08933\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.065949 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-combined-ca-bundle\") pod \"df1ea537-b6a4-49dc-b215-a6b65ed08933\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.065992 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-fernet-keys\") pod \"df1ea537-b6a4-49dc-b215-a6b65ed08933\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.066072 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85r56\" (UniqueName: \"kubernetes.io/projected/df1ea537-b6a4-49dc-b215-a6b65ed08933-kube-api-access-85r56\") pod \"df1ea537-b6a4-49dc-b215-a6b65ed08933\" (UID: \"df1ea537-b6a4-49dc-b215-a6b65ed08933\") " Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.077945 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1ea537-b6a4-49dc-b215-a6b65ed08933-kube-api-access-85r56" (OuterVolumeSpecName: "kube-api-access-85r56") pod "df1ea537-b6a4-49dc-b215-a6b65ed08933" (UID: "df1ea537-b6a4-49dc-b215-a6b65ed08933"). InnerVolumeSpecName "kube-api-access-85r56". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.078058 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-scripts" (OuterVolumeSpecName: "scripts") pod "df1ea537-b6a4-49dc-b215-a6b65ed08933" (UID: "df1ea537-b6a4-49dc-b215-a6b65ed08933"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.083959 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "df1ea537-b6a4-49dc-b215-a6b65ed08933" (UID: "df1ea537-b6a4-49dc-b215-a6b65ed08933"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.086967 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "df1ea537-b6a4-49dc-b215-a6b65ed08933" (UID: "df1ea537-b6a4-49dc-b215-a6b65ed08933"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.125944 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6588c6d648-nnxsr" podUID="64852c10-aeb0-424b-a601-0b46718c0fc7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.137069 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df1ea537-b6a4-49dc-b215-a6b65ed08933" (UID: "df1ea537-b6a4-49dc-b215-a6b65ed08933"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.168736 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.169024 4888 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.169098 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.169157 4888 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.169212 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85r56\" (UniqueName: \"kubernetes.io/projected/df1ea537-b6a4-49dc-b215-a6b65ed08933-kube-api-access-85r56\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.192958 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-config-data" (OuterVolumeSpecName: "config-data") pod "df1ea537-b6a4-49dc-b215-a6b65ed08933" (UID: "df1ea537-b6a4-49dc-b215-a6b65ed08933"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.270929 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1ea537-b6a4-49dc-b215-a6b65ed08933-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.589207 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-knz4q" Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.594993 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-knz4q" event={"ID":"df1ea537-b6a4-49dc-b215-a6b65ed08933","Type":"ContainerDied","Data":"14a3ad3b804e7ff42f43409eaf91ae72efb0be629c7dcafc413bec699f682f43"} Oct 06 15:18:07 crc kubenswrapper[4888]: I1006 15:18:07.595081 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14a3ad3b804e7ff42f43409eaf91ae72efb0be629c7dcafc413bec699f682f43" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.151724 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-598ff49d66-g94f6"] Oct 06 15:18:08 crc kubenswrapper[4888]: E1006 15:18:08.152432 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1ea537-b6a4-49dc-b215-a6b65ed08933" containerName="keystone-bootstrap" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.152447 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1ea537-b6a4-49dc-b215-a6b65ed08933" containerName="keystone-bootstrap" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.152620 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1ea537-b6a4-49dc-b215-a6b65ed08933" containerName="keystone-bootstrap" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.153225 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-598ff49d66-g94f6" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.160401 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.160781 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.160987 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-pj4xj" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.161146 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.161324 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.175711 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.180677 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-598ff49d66-g94f6"] Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.291451 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6v28\" (UniqueName: \"kubernetes.io/projected/5a39a6de-4690-4845-af62-7cb05de93909-kube-api-access-k6v28\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.291585 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-scripts\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6" Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 
15:18:08.291624 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-credential-keys\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.291670 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-internal-tls-certs\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.291701 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-fernet-keys\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.291754 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-config-data\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.291784 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-combined-ca-bundle\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.291828 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-public-tls-certs\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.392841 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-internal-tls-certs\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.392892 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-fernet-keys\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.392937 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-config-data\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.392956 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-combined-ca-bundle\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.392971 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-public-tls-certs\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.393028 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6v28\" (UniqueName: \"kubernetes.io/projected/5a39a6de-4690-4845-af62-7cb05de93909-kube-api-access-k6v28\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.393074 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-scripts\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.393093 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-credential-keys\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.402323 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-public-tls-certs\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.402367 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-combined-ca-bundle\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.404299 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-internal-tls-certs\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.404473 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-scripts\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.416925 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-fernet-keys\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.421531 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-config-data\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.426870 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6v28\" (UniqueName: \"kubernetes.io/projected/5a39a6de-4690-4845-af62-7cb05de93909-kube-api-access-k6v28\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.433038 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5a39a6de-4690-4845-af62-7cb05de93909-credential-keys\") pod \"keystone-598ff49d66-g94f6\" (UID: \"5a39a6de-4690-4845-af62-7cb05de93909\") " pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:08 crc kubenswrapper[4888]: I1006 15:18:08.487195 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:13 crc kubenswrapper[4888]: I1006 15:18:13.631305 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 06 15:18:13 crc kubenswrapper[4888]: I1006 15:18:13.631781 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 06 15:18:13 crc kubenswrapper[4888]: I1006 15:18:13.644095 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Oct 06 15:18:13 crc kubenswrapper[4888]: I1006 15:18:13.724675 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Oct 06 15:18:13 crc kubenswrapper[4888]: I1006 15:18:13.724810 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 06 15:18:13 crc kubenswrapper[4888]: I1006 15:18:13.854395 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Oct 06 15:18:16 crc kubenswrapper[4888]: I1006 15:18:16.803593 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ffc855b96-nhf9w" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Oct 06 15:18:17 crc kubenswrapper[4888]: I1006 15:18:17.113972 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6588c6d648-nnxsr" podUID="64852c10-aeb0-424b-a601-0b46718c0fc7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused"
Oct 06 15:18:25 crc kubenswrapper[4888]: I1006 15:18:25.783332 4888 generic.go:334] "Generic (PLEG): container finished" podID="e1dd4b83-bf5d-4198-b411-97a5a8d057d6" containerID="e0619f8a94676b6111c8e109feacf5357a60bb004e13e97a5a71f16683579f33" exitCode=137
Oct 06 15:18:25 crc kubenswrapper[4888]: I1006 15:18:25.783834 4888 generic.go:334] "Generic (PLEG): container finished" podID="e1dd4b83-bf5d-4198-b411-97a5a8d057d6" containerID="b1762979c77cf1f37fa7ad043379403decb7aa8de51b45ef57d197065bde8d52" exitCode=137
Oct 06 15:18:25 crc kubenswrapper[4888]: I1006 15:18:25.783399 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8547c88cc-gnkgd" event={"ID":"e1dd4b83-bf5d-4198-b411-97a5a8d057d6","Type":"ContainerDied","Data":"e0619f8a94676b6111c8e109feacf5357a60bb004e13e97a5a71f16683579f33"}
Oct 06 15:18:25 crc kubenswrapper[4888]: I1006 15:18:25.783900 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8547c88cc-gnkgd" event={"ID":"e1dd4b83-bf5d-4198-b411-97a5a8d057d6","Type":"ContainerDied","Data":"b1762979c77cf1f37fa7ad043379403decb7aa8de51b45ef57d197065bde8d52"}
Oct 06 15:18:25 crc kubenswrapper[4888]: I1006 15:18:25.786474 4888 generic.go:334] "Generic (PLEG): container finished" podID="a088206b-bb6d-455d-b223-689888a75b1c" containerID="f1e0de4d69381cc8e31ccfc576a282f1ca869f8c025dc04bc66480d6c40f1791" exitCode=137
Oct 06 15:18:25 crc kubenswrapper[4888]: I1006 15:18:25.786503 4888 generic.go:334] "Generic (PLEG): container finished" podID="a088206b-bb6d-455d-b223-689888a75b1c" containerID="edfc78ea49fc0a966637f77fa374d172d46360082daa30f145a38d5cbaf094c8" exitCode=137
Oct 06 15:18:25 crc kubenswrapper[4888]: I1006 15:18:25.786522 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67678875c7-wxfg8" event={"ID":"a088206b-bb6d-455d-b223-689888a75b1c","Type":"ContainerDied","Data":"f1e0de4d69381cc8e31ccfc576a282f1ca869f8c025dc04bc66480d6c40f1791"}
Oct 06 15:18:25 crc kubenswrapper[4888]: I1006 15:18:25.786547 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67678875c7-wxfg8" event={"ID":"a088206b-bb6d-455d-b223-689888a75b1c","Type":"ContainerDied","Data":"edfc78ea49fc0a966637f77fa374d172d46360082daa30f145a38d5cbaf094c8"}
Oct 06 15:18:29 crc kubenswrapper[4888]: E1006 15:18:29.602630 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest"
Oct 06 15:18:29 crc kubenswrapper[4888]: E1006 15:18:29.603411 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmgvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2ceb4186-79b8-4dc6-b54c-7e0681764d35): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 06 15:18:29 crc kubenswrapper[4888]: I1006 15:18:29.821201 4888 generic.go:334] "Generic (PLEG): container finished" podID="1e36f3be-2b0f-45e6-8275-b66240419057" containerID="ac33590e39ab8ecff2307075d973a663542b0a3e362e88171088635fbad92126" exitCode=0
Oct 06 15:18:29 crc kubenswrapper[4888]: I1006 15:18:29.821293 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j2tl6" event={"ID":"1e36f3be-2b0f-45e6-8275-b66240419057","Type":"ContainerDied","Data":"ac33590e39ab8ecff2307075d973a663542b0a3e362e88171088635fbad92126"}
Oct 06 15:18:30 crc kubenswrapper[4888]: E1006 15:18:30.728499 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Oct 06 15:18:30 crc kubenswrapper[4888]: E1006 15:18:30.728694 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tzpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-h6d7m_openstack(caf441af-cd19-416e-9759-8634523c0979): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 06 15:18:30 crc kubenswrapper[4888]: E1006 15:18:30.729915 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-h6d7m" podUID="caf441af-cd19-416e-9759-8634523c0979"
Oct 06 15:18:30 crc kubenswrapper[4888]: E1006 15:18:30.849695 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-h6d7m" podUID="caf441af-cd19-416e-9759-8634523c0979"
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.048333 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6588c6d648-nnxsr"
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.071386 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67678875c7-wxfg8"
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.095064 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-ffc855b96-nhf9w"
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.173950 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a088206b-bb6d-455d-b223-689888a75b1c-horizon-secret-key\") pod \"a088206b-bb6d-455d-b223-689888a75b1c\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.174536 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-scripts\") pod \"a088206b-bb6d-455d-b223-689888a75b1c\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.174608 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs8db\" (UniqueName: \"kubernetes.io/projected/a088206b-bb6d-455d-b223-689888a75b1c-kube-api-access-fs8db\") pod \"a088206b-bb6d-455d-b223-689888a75b1c\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.174707 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a088206b-bb6d-455d-b223-689888a75b1c-logs\") pod \"a088206b-bb6d-455d-b223-689888a75b1c\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.175735 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-config-data\") pod \"a088206b-bb6d-455d-b223-689888a75b1c\" (UID: \"a088206b-bb6d-455d-b223-689888a75b1c\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.179638 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a088206b-bb6d-455d-b223-689888a75b1c-logs" (OuterVolumeSpecName: "logs") pod "a088206b-bb6d-455d-b223-689888a75b1c" (UID: "a088206b-bb6d-455d-b223-689888a75b1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.186283 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a088206b-bb6d-455d-b223-689888a75b1c-logs\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.199820 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-scripts" (OuterVolumeSpecName: "scripts") pod "a088206b-bb6d-455d-b223-689888a75b1c" (UID: "a088206b-bb6d-455d-b223-689888a75b1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.202397 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a088206b-bb6d-455d-b223-689888a75b1c-kube-api-access-fs8db" (OuterVolumeSpecName: "kube-api-access-fs8db") pod "a088206b-bb6d-455d-b223-689888a75b1c" (UID: "a088206b-bb6d-455d-b223-689888a75b1c"). InnerVolumeSpecName "kube-api-access-fs8db". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.212981 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a088206b-bb6d-455d-b223-689888a75b1c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a088206b-bb6d-455d-b223-689888a75b1c" (UID: "a088206b-bb6d-455d-b223-689888a75b1c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.241221 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-config-data" (OuterVolumeSpecName: "config-data") pod "a088206b-bb6d-455d-b223-689888a75b1c" (UID: "a088206b-bb6d-455d-b223-689888a75b1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.288491 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-scripts\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.288529 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs8db\" (UniqueName: \"kubernetes.io/projected/a088206b-bb6d-455d-b223-689888a75b1c-kube-api-access-fs8db\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.288542 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a088206b-bb6d-455d-b223-689888a75b1c-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.288555 4888 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a088206b-bb6d-455d-b223-689888a75b1c-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.330363 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j2tl6"
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.350713 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8547c88cc-gnkgd"
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.487919 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9885565f4-9j2hk"]
Oct 06 15:18:31 crc kubenswrapper[4888]: W1006 15:18:31.490373 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fe4a962_ac46_4084_a054_4b7499863e84.slice/crio-dbe7aa8451debe3d70adad8c2e845cb06ffd0db20d733b220babec24524216ec WatchSource:0}: Error finding container dbe7aa8451debe3d70adad8c2e845cb06ffd0db20d733b220babec24524216ec: Status 404 returned error can't find the container with id dbe7aa8451debe3d70adad8c2e845cb06ffd0db20d733b220babec24524216ec
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.494525 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-combined-ca-bundle\") pod \"1e36f3be-2b0f-45e6-8275-b66240419057\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.494589 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbd2l\" (UniqueName: \"kubernetes.io/projected/1e36f3be-2b0f-45e6-8275-b66240419057-kube-api-access-wbd2l\") pod \"1e36f3be-2b0f-45e6-8275-b66240419057\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.494688 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-scripts\") pod \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.494764 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-config-data\") pod \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.494839 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bhg2\" (UniqueName: \"kubernetes.io/projected/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-kube-api-access-2bhg2\") pod \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.494868 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-config\") pod \"1e36f3be-2b0f-45e6-8275-b66240419057\" (UID: \"1e36f3be-2b0f-45e6-8275-b66240419057\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.494904 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-logs\") pod \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.494923 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-horizon-secret-key\") pod \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\" (UID: \"e1dd4b83-bf5d-4198-b411-97a5a8d057d6\") "
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.496132 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-598ff49d66-g94f6"]
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.499155 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e1dd4b83-bf5d-4198-b411-97a5a8d057d6" (UID: "e1dd4b83-bf5d-4198-b411-97a5a8d057d6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.499458 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-logs" (OuterVolumeSpecName: "logs") pod "e1dd4b83-bf5d-4198-b411-97a5a8d057d6" (UID: "e1dd4b83-bf5d-4198-b411-97a5a8d057d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.508192 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e36f3be-2b0f-45e6-8275-b66240419057-kube-api-access-wbd2l" (OuterVolumeSpecName: "kube-api-access-wbd2l") pod "1e36f3be-2b0f-45e6-8275-b66240419057" (UID: "1e36f3be-2b0f-45e6-8275-b66240419057"). InnerVolumeSpecName "kube-api-access-wbd2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.508283 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-kube-api-access-2bhg2" (OuterVolumeSpecName: "kube-api-access-2bhg2") pod "e1dd4b83-bf5d-4198-b411-97a5a8d057d6" (UID: "e1dd4b83-bf5d-4198-b411-97a5a8d057d6"). InnerVolumeSpecName "kube-api-access-2bhg2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.542419 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-scripts" (OuterVolumeSpecName: "scripts") pod "e1dd4b83-bf5d-4198-b411-97a5a8d057d6" (UID: "e1dd4b83-bf5d-4198-b411-97a5a8d057d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.551172 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-config" (OuterVolumeSpecName: "config") pod "1e36f3be-2b0f-45e6-8275-b66240419057" (UID: "1e36f3be-2b0f-45e6-8275-b66240419057"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.557834 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-config-data" (OuterVolumeSpecName: "config-data") pod "e1dd4b83-bf5d-4198-b411-97a5a8d057d6" (UID: "e1dd4b83-bf5d-4198-b411-97a5a8d057d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.558009 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e36f3be-2b0f-45e6-8275-b66240419057" (UID: "1e36f3be-2b0f-45e6-8275-b66240419057"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.596713 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bhg2\" (UniqueName: \"kubernetes.io/projected/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-kube-api-access-2bhg2\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.596752 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-config\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.596765 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-logs\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.596779 4888 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.596809 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e36f3be-2b0f-45e6-8275-b66240419057-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.596822 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbd2l\" (UniqueName: \"kubernetes.io/projected/1e36f3be-2b0f-45e6-8275-b66240419057-kube-api-access-wbd2l\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.596834 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-scripts\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.596845 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1dd4b83-bf5d-4198-b411-97a5a8d057d6-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.861851 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9885565f4-9j2hk" event={"ID":"3fe4a962-ac46-4084-a054-4b7499863e84","Type":"ContainerStarted","Data":"e79b4f163879c6c1971a098f8696479083bdd56794adb9cd65261c5bd6d0718c"}
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.862156 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9885565f4-9j2hk" event={"ID":"3fe4a962-ac46-4084-a054-4b7499863e84","Type":"ContainerStarted","Data":"dbe7aa8451debe3d70adad8c2e845cb06ffd0db20d733b220babec24524216ec"}
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.869063 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j2tl6" event={"ID":"1e36f3be-2b0f-45e6-8275-b66240419057","Type":"ContainerDied","Data":"ff8ada8b8499445176bf9503035a43b07c062e894040a4970355235dccf6cd98"}
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.869111 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff8ada8b8499445176bf9503035a43b07c062e894040a4970355235dccf6cd98"
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.869194 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j2tl6"
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.882925 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8547c88cc-gnkgd" event={"ID":"e1dd4b83-bf5d-4198-b411-97a5a8d057d6","Type":"ContainerDied","Data":"130eba8e2a69c54462e8ad4945fc6ecaab8c85bd38875f88432afebe1e627890"}
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.890741 4888 scope.go:117] "RemoveContainer" containerID="e0619f8a94676b6111c8e109feacf5357a60bb004e13e97a5a71f16683579f33"
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.886216 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8547c88cc-gnkgd"
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.898075 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67678875c7-wxfg8" event={"ID":"a088206b-bb6d-455d-b223-689888a75b1c","Type":"ContainerDied","Data":"35c2019ceb11a325bea0d279b64a953776d7445a5cad5fd7c401e2a2e05e922f"}
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.898211 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67678875c7-wxfg8"
Oct 06 15:18:31 crc kubenswrapper[4888]: I1006 15:18:31.953603 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jxszs" event={"ID":"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0","Type":"ContainerStarted","Data":"d7855d9ca6a7ab78b4abd7cc27af768c258f7fb42a42a0ceff0b2defe61db629"}
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.063348 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-598ff49d66-g94f6" event={"ID":"5a39a6de-4690-4845-af62-7cb05de93909","Type":"ContainerStarted","Data":"a5f11c8432dd06adee792360ec103dbf08603823fba9e69996b80b25411bbaf6"}
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.072979 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.165003 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jxszs" podStartSLOduration=8.899121774 podStartE2EDuration="45.164974193s" podCreationTimestamp="2025-10-06 15:17:47 +0000 UTC" firstStartedPulling="2025-10-06 15:17:54.495844298 +0000 UTC m=+1014.308195016" lastFinishedPulling="2025-10-06 15:18:30.761696727 +0000 UTC m=+1050.574047435" observedRunningTime="2025-10-06 15:18:32.030431005 +0000 UTC m=+1051.842781723" watchObservedRunningTime="2025-10-06 15:18:32.164974193 +0000 UTC m=+1051.977324911"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.215835 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8547c88cc-gnkgd"]
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.272730 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-942kl"]
Oct 06 15:18:32 crc kubenswrapper[4888]: E1006 15:18:32.288939 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1dd4b83-bf5d-4198-b411-97a5a8d057d6" containerName="horizon-log"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.288963 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1dd4b83-bf5d-4198-b411-97a5a8d057d6" containerName="horizon-log"
Oct 06 15:18:32 crc kubenswrapper[4888]: E1006 15:18:32.288987 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1dd4b83-bf5d-4198-b411-97a5a8d057d6" containerName="horizon"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.288997 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1dd4b83-bf5d-4198-b411-97a5a8d057d6" containerName="horizon"
Oct 06 15:18:32 crc kubenswrapper[4888]: E1006 15:18:32.289007 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a088206b-bb6d-455d-b223-689888a75b1c" containerName="horizon"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.289015 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="a088206b-bb6d-455d-b223-689888a75b1c" containerName="horizon"
Oct 06 15:18:32 crc kubenswrapper[4888]: E1006 15:18:32.289039 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e36f3be-2b0f-45e6-8275-b66240419057" containerName="neutron-db-sync"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.289046 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e36f3be-2b0f-45e6-8275-b66240419057" containerName="neutron-db-sync"
Oct 06 15:18:32 crc kubenswrapper[4888]: E1006 15:18:32.289061 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a088206b-bb6d-455d-b223-689888a75b1c" containerName="horizon-log"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.289067 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="a088206b-bb6d-455d-b223-689888a75b1c" containerName="horizon-log"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.289288 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1dd4b83-bf5d-4198-b411-97a5a8d057d6" containerName="horizon"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.289310 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1dd4b83-bf5d-4198-b411-97a5a8d057d6" containerName="horizon-log"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.289320 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e36f3be-2b0f-45e6-8275-b66240419057" containerName="neutron-db-sync"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.289334 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="a088206b-bb6d-455d-b223-689888a75b1c" containerName="horizon-log"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.289341 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="a088206b-bb6d-455d-b223-689888a75b1c" containerName="horizon"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.290503 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.305630 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8547c88cc-gnkgd"]
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.325415 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-942kl"]
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.326133 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-598ff49d66-g94f6" podStartSLOduration=24.326117568 podStartE2EDuration="24.326117568s" podCreationTimestamp="2025-10-06 15:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:32.209574008 +0000 UTC m=+1052.021924746" watchObservedRunningTime="2025-10-06 15:18:32.326117568 +0000 UTC m=+1052.138468286"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.357965 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67678875c7-wxfg8"]
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.366506 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.366616 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnc9x\" (UniqueName: \"kubernetes.io/projected/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-kube-api-access-mnc9x\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.366681 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.366708 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.366756 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-config\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.366789 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.370788 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67678875c7-wxfg8"]
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.390918 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6896cb7668-gxlzd"]
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.392609 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.415436 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.415680 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.415862 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pbrvh"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.415949 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.440871 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6896cb7668-gxlzd"]
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.468750 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-httpd-config\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.476879 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.476933 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.476981 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-config\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.477050 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-config\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.477084 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.477180 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-combined-ca-bundle\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.477251 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.477297 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mgsq\" (UniqueName: \"kubernetes.io/projected/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-kube-api-access-4mgsq\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.477350 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-ovndb-tls-certs\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.477492 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnc9x\" (UniqueName: \"kubernetes.io/projected/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-kube-api-access-mnc9x\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.478881 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.479557 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.480944 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.481320 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.481765 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-config\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.508897 4888 scope.go:117] "RemoveContainer" containerID="b1762979c77cf1f37fa7ad043379403decb7aa8de51b45ef57d197065bde8d52"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.512544 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnc9x\" (UniqueName: \"kubernetes.io/projected/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-kube-api-access-mnc9x\") pod \"dnsmasq-dns-84b966f6c9-942kl\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.579420 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-combined-ca-bundle\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.579526 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mgsq\" (UniqueName: \"kubernetes.io/projected/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-kube-api-access-4mgsq\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.579559 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-ovndb-tls-certs\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.579698 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-httpd-config\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.579761 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-config\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.584635 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-ovndb-tls-certs\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.585366 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-combined-ca-bundle\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.586247 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-config\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.586701 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-httpd-config\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.608479 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mgsq\" (UniqueName: \"kubernetes.io/projected/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-kube-api-access-4mgsq\") pod \"neutron-6896cb7668-gxlzd\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.637389 4888 scope.go:117] "RemoveContainer" containerID="f1e0de4d69381cc8e31ccfc576a282f1ca869f8c025dc04bc66480d6c40f1791"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.641346 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.724888 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.939937 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a088206b-bb6d-455d-b223-689888a75b1c" path="/var/lib/kubelet/pods/a088206b-bb6d-455d-b223-689888a75b1c/volumes"
Oct 06 15:18:32 crc kubenswrapper[4888]: I1006 15:18:32.945633 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1dd4b83-bf5d-4198-b411-97a5a8d057d6" path="/var/lib/kubelet/pods/e1dd4b83-bf5d-4198-b411-97a5a8d057d6/volumes"
Oct 06 15:18:33 crc kubenswrapper[4888]: I1006 15:18:33.061120 4888 scope.go:117] "RemoveContainer" containerID="edfc78ea49fc0a966637f77fa374d172d46360082daa30f145a38d5cbaf094c8"
Oct 06 15:18:33 crc kubenswrapper[4888]: I1006 15:18:33.140137 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-598ff49d66-g94f6" event={"ID":"5a39a6de-4690-4845-af62-7cb05de93909","Type":"ContainerStarted","Data":"01df57c47106c7284cd61356255e15baa2991c0c120c266327f1b224ad77280c"}
Oct 06 15:18:33 crc kubenswrapper[4888]: I1006 15:18:33.156448 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9885565f4-9j2hk" event={"ID":"3fe4a962-ac46-4084-a054-4b7499863e84","Type":"ContainerStarted","Data":"bc615dbbafe06784c64f62fac413a559413500cf9f129abd2e5040e2e5e5487c"}
Oct 06 15:18:33 crc kubenswrapper[4888]: I1006 15:18:33.156982 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9885565f4-9j2hk"
Oct 06 15:18:33 crc kubenswrapper[4888]: I1006 15:18:33.213052 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9885565f4-9j2hk" podStartSLOduration=31.213029501 podStartE2EDuration="31.213029501s" podCreationTimestamp="2025-10-06 15:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:33.184576575 +0000 UTC m=+1052.996927293" watchObservedRunningTime="2025-10-06 15:18:33.213029501 +0000 UTC m=+1053.025380209"
Oct 06 15:18:33 crc kubenswrapper[4888]: I1006 15:18:33.464774 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-942kl"]
Oct 06 15:18:33 crc kubenswrapper[4888]: I1006 15:18:33.860772 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6896cb7668-gxlzd"]
Oct 06 15:18:34 crc kubenswrapper[4888]: I1006 15:18:34.171460 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6896cb7668-gxlzd" event={"ID":"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606","Type":"ContainerStarted","Data":"142f00b70423cf748c45236953e287f36a881de53da375bf304bf206d3a5b7ab"}
Oct 06 15:18:34 crc kubenswrapper[4888]: I1006 15:18:34.174723 4888 generic.go:334] "Generic (PLEG): container finished" podID="925dde25-ff92-4e6a-9cb8-546fe13c9d6c" containerID="5c65f457cf4653586e94ca46a16612c64567b0ccf39f63b7b82071d901ebcd62" exitCode=0
Oct 06 15:18:34 crc kubenswrapper[4888]: I1006 15:18:34.174779 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-942kl" event={"ID":"925dde25-ff92-4e6a-9cb8-546fe13c9d6c","Type":"ContainerDied","Data":"5c65f457cf4653586e94ca46a16612c64567b0ccf39f63b7b82071d901ebcd62"}
Oct 06 15:18:34 crc kubenswrapper[4888]: I1006 15:18:34.174812 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-942kl" event={"ID":"925dde25-ff92-4e6a-9cb8-546fe13c9d6c","Type":"ContainerStarted","Data":"e8bce8dc7e58ab36080e84c00c47dbd3a175c31267df0cb0c2cc7b25761a5720"}
Oct 06 15:18:34 crc kubenswrapper[4888]: I1006 15:18:34.185294 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9885565f4-9j2hk"
Oct 06 15:18:34 crc kubenswrapper[4888]: I1006 15:18:34.774965 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6588c6d648-nnxsr"
Oct 06 15:18:34 crc kubenswrapper[4888]: I1006 15:18:34.851998 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ffc855b96-nhf9w"]
Oct 06 15:18:34 crc kubenswrapper[4888]: I1006 15:18:34.852635 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ffc855b96-nhf9w" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon" containerID="cri-o://81d5e6ec06435d3284bb89303c374e53eb6fe1e13976a3a09a106e21eef6b6f4" gracePeriod=30
Oct 06 15:18:34 crc kubenswrapper[4888]: I1006 15:18:34.852754 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ffc855b96-nhf9w" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon-log" containerID="cri-o://2df7a1bb23460b3296677c8de731421c3f3ab7fd0970a63e26450d729add3c3c" gracePeriod=30
Oct 06 15:18:34 crc kubenswrapper[4888]: I1006 15:18:34.873603 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-ffc855b96-nhf9w"
Oct 06 15:18:34 crc kubenswrapper[4888]: I1006 15:18:34.887939 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ffc855b96-nhf9w" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:52114->10.217.0.145:8443: read: connection reset by peer"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.195989 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6896cb7668-gxlzd" event={"ID":"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606","Type":"ContainerStarted","Data":"54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23"}
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.196320 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6896cb7668-gxlzd" event={"ID":"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606","Type":"ContainerStarted","Data":"985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe"}
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.196341 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6896cb7668-gxlzd"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.202046 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-942kl" event={"ID":"925dde25-ff92-4e6a-9cb8-546fe13c9d6c","Type":"ContainerStarted","Data":"69d8f2ae049ad8866e69a1b3c3cb421edddde01669c2d2202c5d990dd3ee587f"}
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.202251 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.223736 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6896cb7668-gxlzd" podStartSLOduration=3.223713885 podStartE2EDuration="3.223713885s" podCreationTimestamp="2025-10-06 15:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:35.220915007 +0000 UTC m=+1055.033265725" watchObservedRunningTime="2025-10-06 15:18:35.223713885 +0000 UTC m=+1055.036064603"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.252426 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-942kl" podStartSLOduration=3.252400019 podStartE2EDuration="3.252400019s" podCreationTimestamp="2025-10-06 15:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:35.245055497 +0000 UTC m=+1055.057406245" watchObservedRunningTime="2025-10-06 15:18:35.252400019 +0000 UTC m=+1055.064750737"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.605824 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7dc98fc94f-nlvnj"]
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.607438 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7dc98fc94f-nlvnj"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.610046 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.610316 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.623969 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7dc98fc94f-nlvnj"]
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.766690 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-ovndb-tls-certs\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.766746 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-config\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.766783 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-combined-ca-bundle\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.766919 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmnnn\" (UniqueName: \"kubernetes.io/projected/39bde926-1f59-45eb-8f71-841d380f9c5d-kube-api-access-mmnnn\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj"
Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.767036 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-public-tls-certs\") pod
\"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.767107 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-internal-tls-certs\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.767130 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-httpd-config\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.868425 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-internal-tls-certs\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.868470 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-httpd-config\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.868512 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-ovndb-tls-certs\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.868530 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-config\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.868557 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-combined-ca-bundle\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.869360 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmnnn\" (UniqueName: \"kubernetes.io/projected/39bde926-1f59-45eb-8f71-841d380f9c5d-kube-api-access-mmnnn\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.869427 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-public-tls-certs\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " 
pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.876148 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-combined-ca-bundle\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.876582 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-internal-tls-certs\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.877447 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-httpd-config\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.877555 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-public-tls-certs\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.877572 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-config\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.877925 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/39bde926-1f59-45eb-8f71-841d380f9c5d-ovndb-tls-certs\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.895713 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmnnn\" (UniqueName: \"kubernetes.io/projected/39bde926-1f59-45eb-8f71-841d380f9c5d-kube-api-access-mmnnn\") pod \"neutron-7dc98fc94f-nlvnj\" (UID: \"39bde926-1f59-45eb-8f71-841d380f9c5d\") " pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:35 crc kubenswrapper[4888]: I1006 15:18:35.934264 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:18:36 crc kubenswrapper[4888]: I1006 15:18:36.517376 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7dc98fc94f-nlvnj"] Oct 06 15:18:36 crc kubenswrapper[4888]: W1006 15:18:36.530623 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39bde926_1f59_45eb_8f71_841d380f9c5d.slice/crio-8cf9d3b662a2cabf54bb7ed2b4d6d9aa2d63ddc32fa01758b4f124b7fdd3052b WatchSource:0}: Error finding container 8cf9d3b662a2cabf54bb7ed2b4d6d9aa2d63ddc32fa01758b4f124b7fdd3052b: Status 404 returned error can't find the container with id 8cf9d3b662a2cabf54bb7ed2b4d6d9aa2d63ddc32fa01758b4f124b7fdd3052b Oct 06 15:18:37 crc kubenswrapper[4888]: I1006 15:18:37.223349 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc98fc94f-nlvnj" event={"ID":"39bde926-1f59-45eb-8f71-841d380f9c5d","Type":"ContainerStarted","Data":"8cf9d3b662a2cabf54bb7ed2b4d6d9aa2d63ddc32fa01758b4f124b7fdd3052b"} Oct 06 15:18:37 crc kubenswrapper[4888]: I1006 15:18:37.226684 4888 generic.go:334] "Generic (PLEG): container finished" podID="832f4bfd-1fa5-48ca-87c4-eecd280e1aa0" containerID="d7855d9ca6a7ab78b4abd7cc27af768c258f7fb42a42a0ceff0b2defe61db629" exitCode=0 Oct 06 15:18:37 crc kubenswrapper[4888]: I1006 15:18:37.226727 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jxszs" event={"ID":"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0","Type":"ContainerDied","Data":"d7855d9ca6a7ab78b4abd7cc27af768c258f7fb42a42a0ceff0b2defe61db629"} Oct 06 15:18:38 crc kubenswrapper[4888]: I1006 15:18:38.019210 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ffc855b96-nhf9w" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:52122->10.217.0.145:8443: read: connection reset by peer" Oct 06 15:18:38 crc kubenswrapper[4888]: I1006 15:18:38.236658 4888 generic.go:334] "Generic (PLEG): container finished" podID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerID="81d5e6ec06435d3284bb89303c374e53eb6fe1e13976a3a09a106e21eef6b6f4" exitCode=0 Oct 06 15:18:38 crc kubenswrapper[4888]: I1006 15:18:38.237036 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ffc855b96-nhf9w" event={"ID":"0574c745-cac5-4deb-87cc-a04c1b09aa9a","Type":"ContainerDied","Data":"81d5e6ec06435d3284bb89303c374e53eb6fe1e13976a3a09a106e21eef6b6f4"} Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.120459 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jxszs" Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.166081 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvzsg\" (UniqueName: \"kubernetes.io/projected/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-kube-api-access-dvzsg\") pod \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.166321 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-db-sync-config-data\") pod \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.166361 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-combined-ca-bundle\") pod \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\" (UID: \"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0\") " Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.175479 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "832f4bfd-1fa5-48ca-87c4-eecd280e1aa0" (UID: "832f4bfd-1fa5-48ca-87c4-eecd280e1aa0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.189373 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-kube-api-access-dvzsg" (OuterVolumeSpecName: "kube-api-access-dvzsg") pod "832f4bfd-1fa5-48ca-87c4-eecd280e1aa0" (UID: "832f4bfd-1fa5-48ca-87c4-eecd280e1aa0"). InnerVolumeSpecName "kube-api-access-dvzsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.202045 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "832f4bfd-1fa5-48ca-87c4-eecd280e1aa0" (UID: "832f4bfd-1fa5-48ca-87c4-eecd280e1aa0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.261651 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jxszs" event={"ID":"832f4bfd-1fa5-48ca-87c4-eecd280e1aa0","Type":"ContainerDied","Data":"06f2653edf2e174ce457c0f975a8a4e4c21ce25cb78a5494083b9ef86cc52aa6"} Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.261687 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06f2653edf2e174ce457c0f975a8a4e4c21ce25cb78a5494083b9ef86cc52aa6" Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.261744 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jxszs" Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.268299 4888 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.268324 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:40 crc kubenswrapper[4888]: I1006 15:18:40.268333 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvzsg\" (UniqueName: \"kubernetes.io/projected/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0-kube-api-access-dvzsg\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.412012 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-67fcdb97f5-n5qtk"] Oct 06 15:18:41 crc kubenswrapper[4888]: E1006 15:18:41.413409 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832f4bfd-1fa5-48ca-87c4-eecd280e1aa0" containerName="barbican-db-sync" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.413437 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="832f4bfd-1fa5-48ca-87c4-eecd280e1aa0" containerName="barbican-db-sync" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.413683 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="832f4bfd-1fa5-48ca-87c4-eecd280e1aa0" containerName="barbican-db-sync" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.414836 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.429579 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.429679 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8dkk6" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.429894 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.438178 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-67fcdb97f5-n5qtk"] Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.475947 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b6957c776-hxrm5"] Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.477543 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.483668 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.493780 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5qs\" (UniqueName: \"kubernetes.io/projected/6cf865ad-c3ca-4633-8f09-12865f2e3772-kube-api-access-9d5qs\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.493855 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cf865ad-c3ca-4633-8f09-12865f2e3772-config-data\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.493875 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1f4bff-0c57-4619-bc07-90aec0cc064c-config-data\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.493894 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f1f4bff-0c57-4619-bc07-90aec0cc064c-logs\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.493926 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1f4bff-0c57-4619-bc07-90aec0cc064c-combined-ca-bundle\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.494005 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c52q\" (UniqueName: \"kubernetes.io/projected/6f1f4bff-0c57-4619-bc07-90aec0cc064c-kube-api-access-9c52q\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.494034 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cf865ad-c3ca-4633-8f09-12865f2e3772-config-data-custom\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.494065 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f1f4bff-0c57-4619-bc07-90aec0cc064c-config-data-custom\") pod 
\"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.494096 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cf865ad-c3ca-4633-8f09-12865f2e3772-logs\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.494161 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf865ad-c3ca-4633-8f09-12865f2e3772-combined-ca-bundle\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.504510 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b6957c776-hxrm5"] Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.597159 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c52q\" (UniqueName: \"kubernetes.io/projected/6f1f4bff-0c57-4619-bc07-90aec0cc064c-kube-api-access-9c52q\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.597235 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cf865ad-c3ca-4633-8f09-12865f2e3772-config-data-custom\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.597260 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f1f4bff-0c57-4619-bc07-90aec0cc064c-config-data-custom\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.597284 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cf865ad-c3ca-4633-8f09-12865f2e3772-logs\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.597340 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf865ad-c3ca-4633-8f09-12865f2e3772-combined-ca-bundle\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.597483 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5qs\" (UniqueName: \"kubernetes.io/projected/6cf865ad-c3ca-4633-8f09-12865f2e3772-kube-api-access-9d5qs\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " 
pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.597515 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cf865ad-c3ca-4633-8f09-12865f2e3772-config-data\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.597537 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1f4bff-0c57-4619-bc07-90aec0cc064c-config-data\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.597560 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f1f4bff-0c57-4619-bc07-90aec0cc064c-logs\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.599469 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1f4bff-0c57-4619-bc07-90aec0cc064c-combined-ca-bundle\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.600341 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cf865ad-c3ca-4633-8f09-12865f2e3772-logs\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.603024 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f1f4bff-0c57-4619-bc07-90aec0cc064c-logs\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.607693 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf865ad-c3ca-4633-8f09-12865f2e3772-combined-ca-bundle\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.621665 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cf865ad-c3ca-4633-8f09-12865f2e3772-config-data-custom\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.629569 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1f4bff-0c57-4619-bc07-90aec0cc064c-combined-ca-bundle\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: 
\"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.631655 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1f4bff-0c57-4619-bc07-90aec0cc064c-config-data\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.633609 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f1f4bff-0c57-4619-bc07-90aec0cc064c-config-data-custom\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.638393 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cf865ad-c3ca-4633-8f09-12865f2e3772-config-data\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.667095 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-942kl"] Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.680688 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-942kl" podUID="925dde25-ff92-4e6a-9cb8-546fe13c9d6c" containerName="dnsmasq-dns" containerID="cri-o://69d8f2ae049ad8866e69a1b3c3cb421edddde01669c2d2202c5d990dd3ee587f" gracePeriod=10 Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.684951 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-942kl" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.688194 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c52q\" (UniqueName: \"kubernetes.io/projected/6f1f4bff-0c57-4619-bc07-90aec0cc064c-kube-api-access-9c52q\") pod \"barbican-keystone-listener-6b6957c776-hxrm5\" (UID: \"6f1f4bff-0c57-4619-bc07-90aec0cc064c\") " pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.692451 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5qs\" (UniqueName: \"kubernetes.io/projected/6cf865ad-c3ca-4633-8f09-12865f2e3772-kube-api-access-9d5qs\") pod \"barbican-worker-67fcdb97f5-n5qtk\" (UID: \"6cf865ad-c3ca-4633-8f09-12865f2e3772\") " pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.729578 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-4dcbt"] Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.731164 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.740554 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-67fcdb97f5-n5qtk" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.744820 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-4dcbt"] Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.771575 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-645dff855d-nzssq"] Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.773526 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.775467 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.821007 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-645dff855d-nzssq"] Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.829220 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.907168 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.907222 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data-custom\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.907286 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-combined-ca-bundle\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.907358 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-config\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.907390 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.907415 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzqtw\" (UniqueName: \"kubernetes.io/projected/eb24b00b-412e-45ef-a960-5e291746d95e-kube-api-access-wzqtw\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " 
pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.907564 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.907614 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/780d72b4-5817-49d7-bca7-4eca7daf63df-logs\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.907706 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.907830 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4klt7\" (UniqueName: \"kubernetes.io/projected/780d72b4-5817-49d7-bca7-4eca7daf63df-kube-api-access-4klt7\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:41 crc kubenswrapper[4888]: I1006 15:18:41.907898 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.009638 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.009748 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4klt7\" (UniqueName: \"kubernetes.io/projected/780d72b4-5817-49d7-bca7-4eca7daf63df-kube-api-access-4klt7\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.009898 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.009943 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data\") pod \"barbican-api-645dff855d-nzssq\" (UID: 
\"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.009970 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data-custom\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.010008 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-combined-ca-bundle\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.010055 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-config\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.010089 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.010115 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzqtw\" (UniqueName: \"kubernetes.io/projected/eb24b00b-412e-45ef-a960-5e291746d95e-kube-api-access-wzqtw\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.010163 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.010217 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/780d72b4-5817-49d7-bca7-4eca7daf63df-logs\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.011588 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.012715 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " 
pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.012716 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.012703 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.012936 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-config\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.013351 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/780d72b4-5817-49d7-bca7-4eca7daf63df-logs\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.015969 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data-custom\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.019182 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-combined-ca-bundle\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.021920 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.038312 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4klt7\" (UniqueName: \"kubernetes.io/projected/780d72b4-5817-49d7-bca7-4eca7daf63df-kube-api-access-4klt7\") pod \"barbican-api-645dff855d-nzssq\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.038633 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzqtw\" (UniqueName: \"kubernetes.io/projected/eb24b00b-412e-45ef-a960-5e291746d95e-kube-api-access-wzqtw\") pod \"dnsmasq-dns-75c8ddd69c-4dcbt\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.094315 4888 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.120341 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.288020 4888 generic.go:334] "Generic (PLEG): container finished" podID="925dde25-ff92-4e6a-9cb8-546fe13c9d6c" containerID="69d8f2ae049ad8866e69a1b3c3cb421edddde01669c2d2202c5d990dd3ee587f" exitCode=0 Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.288070 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-942kl" event={"ID":"925dde25-ff92-4e6a-9cb8-546fe13c9d6c","Type":"ContainerDied","Data":"69d8f2ae049ad8866e69a1b3c3cb421edddde01669c2d2202c5d990dd3ee587f"} Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.642654 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84b966f6c9-942kl" podUID="925dde25-ff92-4e6a-9cb8-546fe13c9d6c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: connect: connection refused" Oct 06 15:18:42 crc kubenswrapper[4888]: I1006 15:18:42.957309 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-942kl" Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.036289 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-config\") pod \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.036362 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-nb\") pod \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.036379 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-svc\") pod \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.036467 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-swift-storage-0\") pod \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.036581 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-sb\") pod \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.036632 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnc9x\" (UniqueName: \"kubernetes.io/projected/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-kube-api-access-mnc9x\") pod \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\" (UID: \"925dde25-ff92-4e6a-9cb8-546fe13c9d6c\") " Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 
15:18:43.084519 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-kube-api-access-mnc9x" (OuterVolumeSpecName: "kube-api-access-mnc9x") pod "925dde25-ff92-4e6a-9cb8-546fe13c9d6c" (UID: "925dde25-ff92-4e6a-9cb8-546fe13c9d6c"). InnerVolumeSpecName "kube-api-access-mnc9x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.139778 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnc9x\" (UniqueName: \"kubernetes.io/projected/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-kube-api-access-mnc9x\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.322629 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-942kl" event={"ID":"925dde25-ff92-4e6a-9cb8-546fe13c9d6c","Type":"ContainerDied","Data":"e8bce8dc7e58ab36080e84c00c47dbd3a175c31267df0cb0c2cc7b25761a5720"}
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.322676 4888 scope.go:117] "RemoveContainer" containerID="69d8f2ae049ad8866e69a1b3c3cb421edddde01669c2d2202c5d990dd3ee587f"
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.322785 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-942kl"
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.335359 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "925dde25-ff92-4e6a-9cb8-546fe13c9d6c" (UID: "925dde25-ff92-4e6a-9cb8-546fe13c9d6c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.347040 4888 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.396519 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "925dde25-ff92-4e6a-9cb8-546fe13c9d6c" (UID: "925dde25-ff92-4e6a-9cb8-546fe13c9d6c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.397605 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "925dde25-ff92-4e6a-9cb8-546fe13c9d6c" (UID: "925dde25-ff92-4e6a-9cb8-546fe13c9d6c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.402740 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-config" (OuterVolumeSpecName: "config") pod "925dde25-ff92-4e6a-9cb8-546fe13c9d6c" (UID: "925dde25-ff92-4e6a-9cb8-546fe13c9d6c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.454308 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.454333 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.454344 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-config\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.499778 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "925dde25-ff92-4e6a-9cb8-546fe13c9d6c" (UID: "925dde25-ff92-4e6a-9cb8-546fe13c9d6c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.511835 4888 scope.go:117] "RemoveContainer" containerID="5c65f457cf4653586e94ca46a16612c64567b0ccf39f63b7b82071d901ebcd62"
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.557522 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/925dde25-ff92-4e6a-9cb8-546fe13c9d6c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.583367 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-598ff49d66-g94f6"
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.599595 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-67fcdb97f5-n5qtk"]
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.622113 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-645dff855d-nzssq"]
Oct 06 15:18:43 crc kubenswrapper[4888]: W1006 15:18:43.667990 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod780d72b4_5817_49d7_bca7_4eca7daf63df.slice/crio-222dab747425b358a36171ba8fc00a21e1c88e92244b6e39fcfd623d05526f3d WatchSource:0}: Error finding container 222dab747425b358a36171ba8fc00a21e1c88e92244b6e39fcfd623d05526f3d: Status 404 returned error can't find the container with id 222dab747425b358a36171ba8fc00a21e1c88e92244b6e39fcfd623d05526f3d
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.741971 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-942kl"]
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.749043 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-942kl"]
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.792672 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-4dcbt"]
Oct 06 15:18:43 crc kubenswrapper[4888]: I1006 15:18:43.813832 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b6957c776-hxrm5"]
Oct 06 15:18:43 crc kubenswrapper[4888]: W1006 15:18:43.846015 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f1f4bff_0c57_4619_bc07_90aec0cc064c.slice/crio-62c7f876774b89f4da31c8a450a9ce3e3f668bec0e6667e301e6d0f082fdc05d WatchSource:0}: Error finding container 62c7f876774b89f4da31c8a450a9ce3e3f668bec0e6667e301e6d0f082fdc05d: Status 404 returned error can't find the container with id 62c7f876774b89f4da31c8a450a9ce3e3f668bec0e6667e301e6d0f082fdc05d
Oct 06 15:18:43 crc kubenswrapper[4888]: E1006 15:18:43.935353 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="2ceb4186-79b8-4dc6-b54c-7e0681764d35"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.332859 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ceb4186-79b8-4dc6-b54c-7e0681764d35","Type":"ContainerStarted","Data":"c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065"}
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.333209 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.333050 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ceb4186-79b8-4dc6-b54c-7e0681764d35" containerName="ceilometer-notification-agent" containerID="cri-o://e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24" gracePeriod=30
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.333333 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ceb4186-79b8-4dc6-b54c-7e0681764d35" containerName="proxy-httpd" containerID="cri-o://c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065" gracePeriod=30
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.343948 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-645dff855d-nzssq" event={"ID":"780d72b4-5817-49d7-bca7-4eca7daf63df","Type":"ContainerStarted","Data":"788feb770f48bdc17465c33dcd2ac15c70dfc2d54f787fb66fbe42f218b62809"}
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.344010 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-645dff855d-nzssq" event={"ID":"780d72b4-5817-49d7-bca7-4eca7daf63df","Type":"ContainerStarted","Data":"270ee48e80e0db6b923007f2dca28a7a3f56dfb776d4946e765971cfcb203e78"}
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.344027 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-645dff855d-nzssq" event={"ID":"780d72b4-5817-49d7-bca7-4eca7daf63df","Type":"ContainerStarted","Data":"222dab747425b358a36171ba8fc00a21e1c88e92244b6e39fcfd623d05526f3d"}
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.344098 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-645dff855d-nzssq"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.344629 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-645dff855d-nzssq"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.350623 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" event={"ID":"6f1f4bff-0c57-4619-bc07-90aec0cc064c","Type":"ContainerStarted","Data":"62c7f876774b89f4da31c8a450a9ce3e3f668bec0e6667e301e6d0f082fdc05d"}
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.367147 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc98fc94f-nlvnj" event={"ID":"39bde926-1f59-45eb-8f71-841d380f9c5d","Type":"ContainerStarted","Data":"ef506e221e28a694e1cc31157af8e911f5f085dbb57ad68fce1f87863f1290e4"}
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.367191 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc98fc94f-nlvnj" event={"ID":"39bde926-1f59-45eb-8f71-841d380f9c5d","Type":"ContainerStarted","Data":"5a6636bfa1f88b2f617a8b02a3063f0e07f404254650404926f4d7e0c6ef690b"}
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.368045 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7dc98fc94f-nlvnj"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.393037 4888 generic.go:334] "Generic (PLEG): container finished" podID="eb24b00b-412e-45ef-a960-5e291746d95e" containerID="d20de1b2420c7c7d60ead648c5b48300b2de981eeb1dc339bc967d63804db48d" exitCode=0
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.393107 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" event={"ID":"eb24b00b-412e-45ef-a960-5e291746d95e","Type":"ContainerDied","Data":"d20de1b2420c7c7d60ead648c5b48300b2de981eeb1dc339bc967d63804db48d"}
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.393133 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" event={"ID":"eb24b00b-412e-45ef-a960-5e291746d95e","Type":"ContainerStarted","Data":"88a7e4235c7bb38235b997e5f5fbb0cf595433b7e07bdce918e6cc0430f74b95"}
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.393624 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-645dff855d-nzssq" podStartSLOduration=3.393607259 podStartE2EDuration="3.393607259s" podCreationTimestamp="2025-10-06 15:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:44.392017039 +0000 UTC m=+1064.204367757" watchObservedRunningTime="2025-10-06 15:18:44.393607259 +0000 UTC m=+1064.205957977"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.405376 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67fcdb97f5-n5qtk" event={"ID":"6cf865ad-c3ca-4633-8f09-12865f2e3772","Type":"ContainerStarted","Data":"1486bb347fe61d02cb248fed81571434efda5ef528a7491b649c69f4e6834442"}
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.418860 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7dc98fc94f-nlvnj" podStartSLOduration=9.418840964 podStartE2EDuration="9.418840964s" podCreationTimestamp="2025-10-06 15:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:44.417522743 +0000 UTC m=+1064.229873471" watchObservedRunningTime="2025-10-06 15:18:44.418840964 +0000 UTC m=+1064.231191682"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.853367 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Oct 06 15:18:44 crc kubenswrapper[4888]: E1006 15:18:44.861813 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925dde25-ff92-4e6a-9cb8-546fe13c9d6c" containerName="dnsmasq-dns"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.862094 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="925dde25-ff92-4e6a-9cb8-546fe13c9d6c" containerName="dnsmasq-dns"
Oct 06 15:18:44 crc kubenswrapper[4888]: E1006 15:18:44.862345 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925dde25-ff92-4e6a-9cb8-546fe13c9d6c" containerName="init"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.862389 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="925dde25-ff92-4e6a-9cb8-546fe13c9d6c" containerName="init"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.863050 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="925dde25-ff92-4e6a-9cb8-546fe13c9d6c" containerName="dnsmasq-dns"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.868982 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.872532 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xm2fx"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.873961 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.874183 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.903906 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.953030 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925dde25-ff92-4e6a-9cb8-546fe13c9d6c" path="/var/lib/kubelet/pods/925dde25-ff92-4e6a-9cb8-546fe13c9d6c/volumes"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.990942 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") " pod="openstack/openstackclient"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.991045 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztqp9\" (UniqueName: \"kubernetes.io/projected/8e42f07f-8183-4763-86c6-215d001788b2-kube-api-access-ztqp9\") pod \"openstackclient\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") " pod="openstack/openstackclient"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.991245 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") " pod="openstack/openstackclient"
Oct 06 15:18:44 crc kubenswrapper[4888]: I1006 15:18:44.991653 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config\") pod \"openstackclient\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.082402 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Oct 06 15:18:45 crc kubenswrapper[4888]: E1006 15:18:45.083243 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-ztqp9 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="8e42f07f-8183-4763-86c6-215d001788b2"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.093861 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.094719 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.094828 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config\") pod \"openstackclient\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.094937 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.095006 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztqp9\" (UniqueName: \"kubernetes.io/projected/8e42f07f-8183-4763-86c6-215d001788b2-kube-api-access-ztqp9\") pod \"openstackclient\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.096120 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config\") pod \"openstackclient\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.099085 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: E1006 15:18:45.099345 4888 projected.go:194] Error preparing data for projected volume kube-api-access-ztqp9 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Oct 06 15:18:45 crc kubenswrapper[4888]: E1006 15:18:45.099425 4888 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e42f07f-8183-4763-86c6-215d001788b2-kube-api-access-ztqp9 podName:8e42f07f-8183-4763-86c6-215d001788b2 nodeName:}" failed. No retries permitted until 2025-10-06 15:18:45.599404408 +0000 UTC m=+1065.411755126 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ztqp9" (UniqueName: "kubernetes.io/projected/8e42f07f-8183-4763-86c6-215d001788b2-kube-api-access-ztqp9") pod "openstackclient" (UID: "8e42f07f-8183-4763-86c6-215d001788b2") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.099705 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.173762 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.175225 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.185554 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.301666 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/655523f3-6f3b-4675-8b5a-4c0451a185ca-combined-ca-bundle\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.301719 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kvzf\" (UniqueName: \"kubernetes.io/projected/655523f3-6f3b-4675-8b5a-4c0451a185ca-kube-api-access-6kvzf\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.301820 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/655523f3-6f3b-4675-8b5a-4c0451a185ca-openstack-config\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.301931 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/655523f3-6f3b-4675-8b5a-4c0451a185ca-openstack-config-secret\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.404874 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/655523f3-6f3b-4675-8b5a-4c0451a185ca-openstack-config\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.404993 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/655523f3-6f3b-4675-8b5a-4c0451a185ca-openstack-config-secret\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.405046 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/655523f3-6f3b-4675-8b5a-4c0451a185ca-combined-ca-bundle\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.405065 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kvzf\" (UniqueName: \"kubernetes.io/projected/655523f3-6f3b-4675-8b5a-4c0451a185ca-kube-api-access-6kvzf\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.406074 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/655523f3-6f3b-4675-8b5a-4c0451a185ca-openstack-config\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.422905 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/655523f3-6f3b-4675-8b5a-4c0451a185ca-combined-ca-bundle\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.423943 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/655523f3-6f3b-4675-8b5a-4c0451a185ca-openstack-config-secret\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.448462 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kvzf\" (UniqueName: \"kubernetes.io/projected/655523f3-6f3b-4675-8b5a-4c0451a185ca-kube-api-access-6kvzf\") pod \"openstackclient\" (UID: \"655523f3-6f3b-4675-8b5a-4c0451a185ca\") " pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.451103 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" event={"ID":"eb24b00b-412e-45ef-a960-5e291746d95e","Type":"ContainerStarted","Data":"f43b2940985f5a0603fb7a9721abb119ac1784338eab46672c81d79516e96aec"}
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.452269 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.460898 4888 generic.go:334] "Generic (PLEG): container finished" podID="2ceb4186-79b8-4dc6-b54c-7e0681764d35" containerID="c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065" exitCode=0
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.461823 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ceb4186-79b8-4dc6-b54c-7e0681764d35","Type":"ContainerDied","Data":"c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065"}
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.462434 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.482342 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" podStartSLOduration=4.482324759 podStartE2EDuration="4.482324759s" podCreationTimestamp="2025-10-06 15:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:45.47918993 +0000 UTC m=+1065.291540648" watchObservedRunningTime="2025-10-06 15:18:45.482324759 +0000 UTC m=+1065.294675477"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.482685 4888 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8e42f07f-8183-4763-86c6-215d001788b2" podUID="655523f3-6f3b-4675-8b5a-4c0451a185ca"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.526908 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.561906 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-fc6dd5fdd-d2l7d"]
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.567750 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.580232 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.585452 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.592730 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.598983 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fc6dd5fdd-d2l7d"]
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.612519 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config\") pod \"8e42f07f-8183-4763-86c6-215d001788b2\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") "
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.612641 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-combined-ca-bundle\") pod \"8e42f07f-8183-4763-86c6-215d001788b2\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") "
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.612675 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config-secret\") pod \"8e42f07f-8183-4763-86c6-215d001788b2\" (UID: \"8e42f07f-8183-4763-86c6-215d001788b2\") "
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.613028 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-config-data-custom\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.613070 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-internal-tls-certs\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.613072 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8e42f07f-8183-4763-86c6-215d001788b2" (UID: "8e42f07f-8183-4763-86c6-215d001788b2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.613104 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-config-data\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.613192 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-public-tls-certs\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.613395 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-combined-ca-bundle\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.613422 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0696f900-55ec-420c-a00a-a8e749b36aa0-logs\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.613500 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwsgz\" (UniqueName: \"kubernetes.io/projected/0696f900-55ec-420c-a00a-a8e749b36aa0-kube-api-access-lwsgz\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.613593 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztqp9\" (UniqueName: \"kubernetes.io/projected/8e42f07f-8183-4763-86c6-215d001788b2-kube-api-access-ztqp9\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.613610 4888 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.617725 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e42f07f-8183-4763-86c6-215d001788b2" (UID: "8e42f07f-8183-4763-86c6-215d001788b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.617815 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8e42f07f-8183-4763-86c6-215d001788b2" (UID: "8e42f07f-8183-4763-86c6-215d001788b2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.715183 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-config-data\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.715248 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-public-tls-certs\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.715285 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-combined-ca-bundle\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.715304 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0696f900-55ec-420c-a00a-a8e749b36aa0-logs\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.715358 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwsgz\" (UniqueName: \"kubernetes.io/projected/0696f900-55ec-420c-a00a-a8e749b36aa0-kube-api-access-lwsgz\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.715426 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-config-data-custom\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.715442 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-internal-tls-certs\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.715485 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.715497 4888 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e42f07f-8183-4763-86c6-215d001788b2-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.715818 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0696f900-55ec-420c-a00a-a8e749b36aa0-logs\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.720462 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-config-data-custom\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.724618 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-internal-tls-certs\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.726086 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-config-data\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.726708 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-public-tls-certs\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.731682 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0696f900-55ec-420c-a00a-a8e749b36aa0-combined-ca-bundle\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.742227 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwsgz\" (UniqueName: \"kubernetes.io/projected/0696f900-55ec-420c-a00a-a8e749b36aa0-kube-api-access-lwsgz\") pod \"barbican-api-fc6dd5fdd-d2l7d\" (UID: \"0696f900-55ec-420c-a00a-a8e749b36aa0\") " pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:45 crc kubenswrapper[4888]: I1006 15:18:45.887912 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:46 crc kubenswrapper[4888]: I1006 15:18:46.468882 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Oct 06 15:18:46 crc kubenswrapper[4888]: I1006 15:18:46.483918 4888 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8e42f07f-8183-4763-86c6-215d001788b2" podUID="655523f3-6f3b-4675-8b5a-4c0451a185ca"
Oct 06 15:18:46 crc kubenswrapper[4888]: I1006 15:18:46.804198 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ffc855b96-nhf9w" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Oct 06 15:18:46 crc kubenswrapper[4888]: I1006 15:18:46.804488 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ffc855b96-nhf9w"
Oct 06 15:18:46 crc kubenswrapper[4888]: I1006 15:18:46.944596 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e42f07f-8183-4763-86c6-215d001788b2" path="/var/lib/kubelet/pods/8e42f07f-8183-4763-86c6-215d001788b2/volumes"
Oct 06 15:18:47 crc kubenswrapper[4888]: I1006 15:18:47.105556 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fc6dd5fdd-d2l7d"]
Oct 06 15:18:47 crc kubenswrapper[4888]: I1006 15:18:47.179997 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Oct 06 15:18:47 crc kubenswrapper[4888]: W1006 15:18:47.202772 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod655523f3_6f3b_4675_8b5a_4c0451a185ca.slice/crio-1998e3ef2c44b4e5a4826032e402a3fd5004067ee5a99e302d46ff85783f023f WatchSource:0}: Error finding container 1998e3ef2c44b4e5a4826032e402a3fd5004067ee5a99e302d46ff85783f023f: Status 404 returned error can't find the container with id 1998e3ef2c44b4e5a4826032e402a3fd5004067ee5a99e302d46ff85783f023f
Oct 06 15:18:47 crc kubenswrapper[4888]: I1006 15:18:47.535818 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"655523f3-6f3b-4675-8b5a-4c0451a185ca","Type":"ContainerStarted","Data":"1998e3ef2c44b4e5a4826032e402a3fd5004067ee5a99e302d46ff85783f023f"}
Oct 06 15:18:47 crc kubenswrapper[4888]: I1006 15:18:47.541255 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" event={"ID":"6f1f4bff-0c57-4619-bc07-90aec0cc064c","Type":"ContainerStarted","Data":"ddefb562c39222c4f363b57c06ab6e8d087a01bb3f281d8191fa473f42a53205"}
Oct 06 15:18:47 crc kubenswrapper[4888]: I1006 15:18:47.542486 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fc6dd5fdd-d2l7d" event={"ID":"0696f900-55ec-420c-a00a-a8e749b36aa0","Type":"ContainerStarted","Data":"4a9e4370f3acc264b58d9eec06d063cc0019c9deb015f9ea315914a5120144bb"}
Oct 06 15:18:47 crc kubenswrapper[4888]: I1006 15:18:47.547589 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67fcdb97f5-n5qtk" event={"ID":"6cf865ad-c3ca-4633-8f09-12865f2e3772","Type":"ContainerStarted","Data":"9c2204aaa47a3f81b7473cec3d667271b0a44cb9c2ad83479b4d7b1b68923405"}
Oct 06 15:18:48 crc kubenswrapper[4888]: I1006 15:18:48.558597 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fc6dd5fdd-d2l7d" event={"ID":"0696f900-55ec-420c-a00a-a8e749b36aa0","Type":"ContainerStarted","Data":"6339bb0c150d51cf4f5c54c8eb8f6135f008d251294d370c0a64f32aa3542db5"}
Oct 06 15:18:48 crc kubenswrapper[4888]: I1006 15:18:48.559169 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fc6dd5fdd-d2l7d" event={"ID":"0696f900-55ec-420c-a00a-a8e749b36aa0","Type":"ContainerStarted","Data":"c75e568eb77cfdb23ef78b669f4117cad86aff36b35da7fd86b1676e989d1227"}
Oct 06 15:18:48 crc kubenswrapper[4888]: I1006 15:18:48.560365 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:48 crc kubenswrapper[4888]: I1006 15:18:48.560392 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fc6dd5fdd-d2l7d"
Oct 06 15:18:48 crc kubenswrapper[4888]: I1006 15:18:48.565252 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67fcdb97f5-n5qtk" event={"ID":"6cf865ad-c3ca-4633-8f09-12865f2e3772","Type":"ContainerStarted","Data":"5c5c1fcd3ea2d15a8d7176fb420a3e76b9f5645f931dd43e77f985793336981b"}
Oct 06 15:18:48 crc kubenswrapper[4888]: I1006 15:18:48.567871 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h6d7m" event={"ID":"caf441af-cd19-416e-9759-8634523c0979","Type":"ContainerStarted","Data":"6716d3f3ec03b19e57a35133a801ce1bae9aebc742544db97a4df7f98df283b3"}
Oct 06 15:18:48 crc kubenswrapper[4888]: I1006 15:18:48.569691 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" event={"ID":"6f1f4bff-0c57-4619-bc07-90aec0cc064c","Type":"ContainerStarted","Data":"dfa912c24256523488c18522646d8688fbec68bcd0e07744880e83ddb7a23cd9"}
Oct 06 15:18:48 crc kubenswrapper[4888]: I1006 15:18:48.580208 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-fc6dd5fdd-d2l7d" podStartSLOduration=3.580189325 podStartE2EDuration="3.580189325s" podCreationTimestamp="2025-10-06 15:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:48.576177649 +0000 UTC m=+1068.388528367" watchObservedRunningTime="2025-10-06 15:18:48.580189325 +0000 UTC m=+1068.392540043"
Oct 06 15:18:48 crc kubenswrapper[4888]: I1006 15:18:48.605737 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b6957c776-hxrm5" podStartSLOduration=4.554495552 podStartE2EDuration="7.605701679s" podCreationTimestamp="2025-10-06 15:18:41 +0000 UTC" firstStartedPulling="2025-10-06 15:18:43.859469757 +0000 UTC m=+1063.671820475" lastFinishedPulling="2025-10-06 15:18:46.910675874 +0000 UTC m=+1066.723026602" observedRunningTime="2025-10-06 15:18:48.600613609 +0000 UTC m=+1068.412964327" watchObservedRunningTime="2025-10-06 15:18:48.605701679 +0000 UTC m=+1068.418052397"
Oct 06 15:18:48 crc kubenswrapper[4888]: I1006 15:18:48.650719 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-h6d7m" podStartSLOduration=10.068590903 podStartE2EDuration="1m2.650696356s" podCreationTimestamp="2025-10-06 15:17:46 +0000 UTC" firstStartedPulling="2025-10-06 15:17:54.343535532 +0000 UTC m=+1014.155886250" lastFinishedPulling="2025-10-06 15:18:46.925640985 +0000 UTC m=+1066.737991703" observedRunningTime="2025-10-06 15:18:48.642041533 +0000 UTC m=+1068.454392271" watchObservedRunningTime="2025-10-06 15:18:48.650696356 +0000 UTC m=+1068.463047074"
Oct 06 15:18:48 crc kubenswrapper[4888]: I1006 15:18:48.662527 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-67fcdb97f5-n5qtk" podStartSLOduration=4.378438326 podStartE2EDuration="7.662500128s" podCreationTimestamp="2025-10-06 15:18:41 +0000 UTC" firstStartedPulling="2025-10-06 15:18:43.640721927 +0000 UTC m=+1063.453072645" lastFinishedPulling="2025-10-06 15:18:46.924783729 +0000 UTC m=+1066.737134447" observedRunningTime="2025-10-06 15:18:48.658194182 +0000 UTC m=+1068.470544920" watchObservedRunningTime="2025-10-06 15:18:48.662500128 +0000 UTC m=+1068.474850846"
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.439382 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.528169 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmgvk\" (UniqueName: \"kubernetes.io/projected/2ceb4186-79b8-4dc6-b54c-7e0681764d35-kube-api-access-rmgvk\") pod \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") "
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.528273 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-sg-core-conf-yaml\") pod \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") "
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.528325 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-run-httpd\") pod \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") "
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.528397 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-config-data\") pod \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") "
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.528480 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-scripts\") pod \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") "
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.528509 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-combined-ca-bundle\") pod \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") "
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.528546 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-log-httpd\") pod \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\" (UID: \"2ceb4186-79b8-4dc6-b54c-7e0681764d35\") "
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.529446 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2ceb4186-79b8-4dc6-b54c-7e0681764d35" (UID: "2ceb4186-79b8-4dc6-b54c-7e0681764d35"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.533118 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2ceb4186-79b8-4dc6-b54c-7e0681764d35" (UID: "2ceb4186-79b8-4dc6-b54c-7e0681764d35"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.545369 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ceb4186-79b8-4dc6-b54c-7e0681764d35-kube-api-access-rmgvk" (OuterVolumeSpecName: "kube-api-access-rmgvk") pod "2ceb4186-79b8-4dc6-b54c-7e0681764d35" (UID: "2ceb4186-79b8-4dc6-b54c-7e0681764d35"). InnerVolumeSpecName "kube-api-access-rmgvk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.552727 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2ceb4186-79b8-4dc6-b54c-7e0681764d35" (UID: "2ceb4186-79b8-4dc6-b54c-7e0681764d35"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.568937 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-scripts" (OuterVolumeSpecName: "scripts") pod "2ceb4186-79b8-4dc6-b54c-7e0681764d35" (UID: "2ceb4186-79b8-4dc6-b54c-7e0681764d35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.600649 4888 generic.go:334] "Generic (PLEG): container finished" podID="2ceb4186-79b8-4dc6-b54c-7e0681764d35" containerID="e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24" exitCode=0
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.600899 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ceb4186-79b8-4dc6-b54c-7e0681764d35","Type":"ContainerDied","Data":"e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24"}
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.600956 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ceb4186-79b8-4dc6-b54c-7e0681764d35","Type":"ContainerDied","Data":"c7c088164569f8789e4944c0a0ba70268ceacca5749c6aa18009b7d23801e3ac"}
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.600975 4888 scope.go:117] "RemoveContainer" containerID="c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065"
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.601809 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.616561 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ceb4186-79b8-4dc6-b54c-7e0681764d35" (UID: "2ceb4186-79b8-4dc6-b54c-7e0681764d35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.634584 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-scripts\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.634622 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.634634 4888 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.634646 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmgvk\" (UniqueName: \"kubernetes.io/projected/2ceb4186-79b8-4dc6-b54c-7e0681764d35-kube-api-access-rmgvk\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.634657 4888 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.634667 4888 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ceb4186-79b8-4dc6-b54c-7e0681764d35-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.711065 4888 scope.go:117] "RemoveContainer" containerID="e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24"
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.713788 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-config-data" (OuterVolumeSpecName: "config-data") pod "2ceb4186-79b8-4dc6-b54c-7e0681764d35" (UID: "2ceb4186-79b8-4dc6-b54c-7e0681764d35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.739524 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ceb4186-79b8-4dc6-b54c-7e0681764d35-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.759288 4888 scope.go:117] "RemoveContainer" containerID="c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065"
Oct 06 15:18:49 crc kubenswrapper[4888]: E1006 15:18:49.759718 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065\": container with ID starting with c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065 not found: ID does not exist" containerID="c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065"
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.759765 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065"} err="failed to get container status \"c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065\": rpc error: code = NotFound desc = could not find container \"c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065\": container with ID starting with c82c991cc9df3eaa1ec2d8caa9d0fe7941ff3c73952ec713bc4ffee5b6572065 not found: ID does not exist"
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.760042 4888 scope.go:117] "RemoveContainer" containerID="e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24"
Oct 06 15:18:49 crc kubenswrapper[4888]: E1006 15:18:49.764267 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24\": container with ID starting with e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24 not found: ID does not exist" containerID="e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24"
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.764323 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24"} err="failed to get container status \"e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24\": rpc error: code = NotFound desc = could not find container \"e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24\": container with ID starting with e43aa537a7a7ad1b26340f45ed6adb42e56cf5f13ecc3df34da941bd2cff3e24 not found: ID does not exist"
Oct 06 15:18:49 crc kubenswrapper[4888]: I1006 15:18:49.987742 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.007862 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.015252 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 06 15:18:50 crc kubenswrapper[4888]: E1006 15:18:50.016918 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ceb4186-79b8-4dc6-b54c-7e0681764d35" containerName="ceilometer-notification-agent"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.016952 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ceb4186-79b8-4dc6-b54c-7e0681764d35" containerName="ceilometer-notification-agent"
Oct 06 15:18:50 crc kubenswrapper[4888]: E1006 15:18:50.016975 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ceb4186-79b8-4dc6-b54c-7e0681764d35" containerName="proxy-httpd"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.016984 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ceb4186-79b8-4dc6-b54c-7e0681764d35" containerName="proxy-httpd"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.017545 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ceb4186-79b8-4dc6-b54c-7e0681764d35" containerName="ceilometer-notification-agent"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.017578 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ceb4186-79b8-4dc6-b54c-7e0681764d35" containerName="proxy-httpd"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.059370 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.059521 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.065185 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.065347 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.152503 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.152537 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.152560 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-config-data\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.152638 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-run-httpd\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.152658 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hqkj\" (UniqueName: \"kubernetes.io/projected/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-kube-api-access-8hqkj\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.152681 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-log-httpd\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.152709 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-scripts\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.254154 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-log-httpd\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.254229 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-scripts\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.254338 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.254374 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.254397 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-config-data\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.254445 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-run-httpd\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.254483 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hqkj\" (UniqueName: \"kubernetes.io/projected/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-kube-api-access-8hqkj\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.254663 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-log-httpd\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.254952 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-run-httpd\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.260939 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.261381 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.261453 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-scripts\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.262999 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-config-data\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.272094 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hqkj\" (UniqueName: \"kubernetes.io/projected/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-kube-api-access-8hqkj\") pod \"ceilometer-0\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " pod="openstack/ceilometer-0"
Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.397976 4888 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.933875 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ceb4186-79b8-4dc6-b54c-7e0681764d35" path="/var/lib/kubelet/pods/2ceb4186-79b8-4dc6-b54c-7e0681764d35/volumes" Oct 06 15:18:50 crc kubenswrapper[4888]: I1006 15:18:50.952946 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:18:51 crc kubenswrapper[4888]: I1006 15:18:51.633433 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"885b6262-4e41-4b46-9a4f-f8a2d0123cc7","Type":"ContainerStarted","Data":"1de6919f8bc3bfa8ab8e1731fb5b8fc8a6e74ace18f8c57d6f6ea327ec89d669"} Oct 06 15:18:52 crc kubenswrapper[4888]: I1006 15:18:52.095987 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:18:52 crc kubenswrapper[4888]: I1006 15:18:52.183913 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8kzqg"] Oct 06 15:18:52 crc kubenswrapper[4888]: I1006 15:18:52.184155 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" podUID="4a4b9805-b00b-4f77-9df2-4e93a713e673" containerName="dnsmasq-dns" containerID="cri-o://6a6154567e8a6e33eedfde4e3c07bab975dabaa0e6222049efaaf552c48418c9" gracePeriod=10 Oct 06 15:18:52 crc kubenswrapper[4888]: I1006 15:18:52.665935 4888 generic.go:334] "Generic (PLEG): container finished" podID="4a4b9805-b00b-4f77-9df2-4e93a713e673" containerID="6a6154567e8a6e33eedfde4e3c07bab975dabaa0e6222049efaaf552c48418c9" exitCode=0 Oct 06 15:18:52 crc kubenswrapper[4888]: I1006 15:18:52.666287 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" event={"ID":"4a4b9805-b00b-4f77-9df2-4e93a713e673","Type":"ContainerDied","Data":"6a6154567e8a6e33eedfde4e3c07bab975dabaa0e6222049efaaf552c48418c9"} Oct 06 15:18:52 crc kubenswrapper[4888]: I1006 15:18:52.684947 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"885b6262-4e41-4b46-9a4f-f8a2d0123cc7","Type":"ContainerStarted","Data":"5f0704593ba51b549e97337e3c99576b5ece80de48f5bc25ee7936baac24da15"} Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.230400 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.335241 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-svc\") pod \"4a4b9805-b00b-4f77-9df2-4e93a713e673\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.335350 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-config\") pod \"4a4b9805-b00b-4f77-9df2-4e93a713e673\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.335374 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-nb\") pod \"4a4b9805-b00b-4f77-9df2-4e93a713e673\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.335400 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-sb\") pod \"4a4b9805-b00b-4f77-9df2-4e93a713e673\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.335428 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-swift-storage-0\") pod \"4a4b9805-b00b-4f77-9df2-4e93a713e673\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.335512 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45jqp\" (UniqueName: \"kubernetes.io/projected/4a4b9805-b00b-4f77-9df2-4e93a713e673-kube-api-access-45jqp\") pod \"4a4b9805-b00b-4f77-9df2-4e93a713e673\" (UID: \"4a4b9805-b00b-4f77-9df2-4e93a713e673\") " Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.345473 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a4b9805-b00b-4f77-9df2-4e93a713e673-kube-api-access-45jqp" (OuterVolumeSpecName: "kube-api-access-45jqp") pod "4a4b9805-b00b-4f77-9df2-4e93a713e673" (UID: "4a4b9805-b00b-4f77-9df2-4e93a713e673"). InnerVolumeSpecName "kube-api-access-45jqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.416791 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a4b9805-b00b-4f77-9df2-4e93a713e673" (UID: "4a4b9805-b00b-4f77-9df2-4e93a713e673"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.439342 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.439378 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45jqp\" (UniqueName: \"kubernetes.io/projected/4a4b9805-b00b-4f77-9df2-4e93a713e673-kube-api-access-45jqp\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.452320 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4a4b9805-b00b-4f77-9df2-4e93a713e673" (UID: "4a4b9805-b00b-4f77-9df2-4e93a713e673"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.473236 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-config" (OuterVolumeSpecName: "config") pod "4a4b9805-b00b-4f77-9df2-4e93a713e673" (UID: "4a4b9805-b00b-4f77-9df2-4e93a713e673"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.487328 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a4b9805-b00b-4f77-9df2-4e93a713e673" (UID: "4a4b9805-b00b-4f77-9df2-4e93a713e673"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.504293 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a4b9805-b00b-4f77-9df2-4e93a713e673" (UID: "4a4b9805-b00b-4f77-9df2-4e93a713e673"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.541175 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.541228 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.541249 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.541262 4888 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a4b9805-b00b-4f77-9df2-4e93a713e673-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.723357 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" event={"ID":"4a4b9805-b00b-4f77-9df2-4e93a713e673","Type":"ContainerDied","Data":"d3b0adbdfd257b0620792699022012151a1f9abf6ea8d59f732b8b24520f87b9"} Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.723413 4888 scope.go:117] "RemoveContainer" containerID="6a6154567e8a6e33eedfde4e3c07bab975dabaa0e6222049efaaf552c48418c9" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.723572 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8kzqg" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.784223 4888 scope.go:117] "RemoveContainer" containerID="f3fc45b407916b9417eb0de175fe7d083bb034f4e52c8b1462d43c2c72f39d54" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.784372 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"885b6262-4e41-4b46-9a4f-f8a2d0123cc7","Type":"ContainerStarted","Data":"b22c1d48440e8b68055b8df22c81604c7040dcefb57b141d6fb48cec4b96bf74"} Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.846184 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8kzqg"] Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.865671 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8kzqg"] Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.934065 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5786cd685c-vnwnq"] Oct 06 15:18:53 crc kubenswrapper[4888]: E1006 15:18:53.934440 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4b9805-b00b-4f77-9df2-4e93a713e673" containerName="init" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.934453 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4b9805-b00b-4f77-9df2-4e93a713e673" containerName="init" Oct 06 15:18:53 crc kubenswrapper[4888]: E1006 15:18:53.934486 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4b9805-b00b-4f77-9df2-4e93a713e673" containerName="dnsmasq-dns" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.934494 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4b9805-b00b-4f77-9df2-4e93a713e673" containerName="dnsmasq-dns" Oct 06 15:18:53 crc 
kubenswrapper[4888]: I1006 15:18:53.934674 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a4b9805-b00b-4f77-9df2-4e93a713e673" containerName="dnsmasq-dns" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.935587 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.941241 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.941997 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.942207 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 06 15:18:53 crc kubenswrapper[4888]: I1006 15:18:53.974501 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5786cd685c-vnwnq"] Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.055367 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-internal-tls-certs\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.055453 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-public-tls-certs\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.055474 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhqkz\" (UniqueName: \"kubernetes.io/projected/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-kube-api-access-fhqkz\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.055524 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-combined-ca-bundle\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.055545 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-config-data\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.055604 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-run-httpd\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.055636 4888 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-etc-swift\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.055669 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-log-httpd\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.158097 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-run-httpd\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.158152 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-etc-swift\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.158184 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-log-httpd\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.158209 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-internal-tls-certs\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.158281 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-public-tls-certs\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.158304 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqkz\" (UniqueName: \"kubernetes.io/projected/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-kube-api-access-fhqkz\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.158352 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-combined-ca-bundle\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.158371 4888 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-config-data\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.158724 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-run-httpd\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.159572 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-log-httpd\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.167705 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-internal-tls-certs\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.168028 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-public-tls-certs\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.173919 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-combined-ca-bundle\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.174940 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-config-data\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.181160 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-etc-swift\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.189979 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqkz\" (UniqueName: \"kubernetes.io/projected/7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9-kube-api-access-fhqkz\") pod \"swift-proxy-5786cd685c-vnwnq\" (UID: \"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9\") " pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.373510 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.811647 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"885b6262-4e41-4b46-9a4f-f8a2d0123cc7","Type":"ContainerStarted","Data":"6e6afea0019eec19f6e27371d4a4a2a101dc77558e2a5567cc1c156ddbd5569a"} Oct 06 15:18:54 crc kubenswrapper[4888]: I1006 15:18:54.949140 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a4b9805-b00b-4f77-9df2-4e93a713e673" path="/var/lib/kubelet/pods/4a4b9805-b00b-4f77-9df2-4e93a713e673/volumes" Oct 06 15:18:55 crc kubenswrapper[4888]: I1006 15:18:55.180713 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5786cd685c-vnwnq"] Oct 06 15:18:55 crc kubenswrapper[4888]: I1006 15:18:55.181152 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-645dff855d-nzssq" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 15:18:55 crc kubenswrapper[4888]: I1006 15:18:55.834338 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5786cd685c-vnwnq" event={"ID":"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9","Type":"ContainerStarted","Data":"8816b015e1f30676ab8f967c1c789a2fd2376bc90b1b9e6abb7810cdca4faa3d"} Oct 06 15:18:55 crc kubenswrapper[4888]: I1006 15:18:55.834676 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5786cd685c-vnwnq" event={"ID":"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9","Type":"ContainerStarted","Data":"4fb91d90be7d44f41c5471ea2d4238bb56fc4307d8872ba12bf1779ddb437895"} Oct 06 15:18:55 crc kubenswrapper[4888]: I1006 15:18:55.834690 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5786cd685c-vnwnq" event={"ID":"7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9","Type":"ContainerStarted","Data":"1d39ffb672d23e487ad5b6abd7b7cf31221c4beff067f8b188871c43ac5db842"} Oct 06 15:18:55 crc kubenswrapper[4888]: I1006 15:18:55.835077 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:55 crc kubenswrapper[4888]: I1006 15:18:55.835117 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:18:56 crc kubenswrapper[4888]: I1006 15:18:56.537389 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:56 crc kubenswrapper[4888]: I1006 15:18:56.564558 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5786cd685c-vnwnq" podStartSLOduration=3.564537961 podStartE2EDuration="3.564537961s" podCreationTimestamp="2025-10-06 15:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:18:55.865829246 +0000 UTC m=+1075.678179984" watchObservedRunningTime="2025-10-06 15:18:56.564537961 +0000 UTC m=+1076.376888679" Oct 06 15:18:56 crc kubenswrapper[4888]: I1006 15:18:56.804254 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ffc855b96-nhf9w" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Oct 06 15:18:57 
crc kubenswrapper[4888]: I1006 15:18:57.103462 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:18:57 crc kubenswrapper[4888]: I1006 15:18:57.857453 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"885b6262-4e41-4b46-9a4f-f8a2d0123cc7","Type":"ContainerStarted","Data":"b7834f1a4606ddad5dd349ec6de485c5a61914966272a409cedca4b70bcf966d"} Oct 06 15:18:57 crc kubenswrapper[4888]: I1006 15:18:57.858237 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 15:18:57 crc kubenswrapper[4888]: I1006 15:18:57.894782 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.824290269 podStartE2EDuration="8.894761748s" podCreationTimestamp="2025-10-06 15:18:49 +0000 UTC" firstStartedPulling="2025-10-06 15:18:50.987543605 +0000 UTC m=+1070.799894323" lastFinishedPulling="2025-10-06 15:18:57.058015084 +0000 UTC m=+1076.870365802" observedRunningTime="2025-10-06 15:18:57.876553044 +0000 UTC m=+1077.688903762" watchObservedRunningTime="2025-10-06 15:18:57.894761748 +0000 UTC m=+1077.707112466" Oct 06 15:18:58 crc kubenswrapper[4888]: I1006 15:18:58.806232 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:18:59 crc kubenswrapper[4888]: I1006 15:18:59.881030 4888 generic.go:334] "Generic (PLEG): container finished" podID="caf441af-cd19-416e-9759-8634523c0979" containerID="6716d3f3ec03b19e57a35133a801ce1bae9aebc742544db97a4df7f98df283b3" exitCode=0 Oct 06 15:18:59 crc kubenswrapper[4888]: I1006 15:18:59.881109 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h6d7m" event={"ID":"caf441af-cd19-416e-9759-8634523c0979","Type":"ContainerDied","Data":"6716d3f3ec03b19e57a35133a801ce1bae9aebc742544db97a4df7f98df283b3"} Oct 06 15:18:59 crc kubenswrapper[4888]: I1006 15:18:59.882842 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="ceilometer-central-agent" containerID="cri-o://5f0704593ba51b549e97337e3c99576b5ece80de48f5bc25ee7936baac24da15" gracePeriod=30 Oct 06 15:18:59 crc kubenswrapper[4888]: I1006 15:18:59.882864 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="sg-core" containerID="cri-o://6e6afea0019eec19f6e27371d4a4a2a101dc77558e2a5567cc1c156ddbd5569a" gracePeriod=30 Oct 06 15:18:59 crc kubenswrapper[4888]: I1006 15:18:59.882889 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="ceilometer-notification-agent" containerID="cri-o://b22c1d48440e8b68055b8df22c81604c7040dcefb57b141d6fb48cec4b96bf74" gracePeriod=30 Oct 06 15:18:59 crc kubenswrapper[4888]: I1006 15:18:59.882884 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="proxy-httpd" containerID="cri-o://b7834f1a4606ddad5dd349ec6de485c5a61914966272a409cedca4b70bcf966d" gracePeriod=30 Oct 06 15:18:59 crc kubenswrapper[4888]: I1006 15:18:59.908492 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-fc6dd5fdd-d2l7d" podUID="0696f900-55ec-420c-a00a-a8e749b36aa0" 
containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 15:18:59 crc kubenswrapper[4888]: I1006 15:18:59.909169 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-fc6dd5fdd-d2l7d" podUID="0696f900-55ec-420c-a00a-a8e749b36aa0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.396131 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fc6dd5fdd-d2l7d" Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.549338 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fc6dd5fdd-d2l7d" Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.628225 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-645dff855d-nzssq"] Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.628451 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-645dff855d-nzssq" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api-log" containerID="cri-o://270ee48e80e0db6b923007f2dca28a7a3f56dfb776d4946e765971cfcb203e78" gracePeriod=30 Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.629144 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-645dff855d-nzssq" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api" containerID="cri-o://788feb770f48bdc17465c33dcd2ac15c70dfc2d54f787fb66fbe42f218b62809" gracePeriod=30 Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.667642 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-645dff855d-nzssq" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": EOF" Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.903896 4888 generic.go:334] "Generic (PLEG): container finished" podID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerID="270ee48e80e0db6b923007f2dca28a7a3f56dfb776d4946e765971cfcb203e78" exitCode=143 Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.904241 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-645dff855d-nzssq" event={"ID":"780d72b4-5817-49d7-bca7-4eca7daf63df","Type":"ContainerDied","Data":"270ee48e80e0db6b923007f2dca28a7a3f56dfb776d4946e765971cfcb203e78"} Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.910150 4888 generic.go:334] "Generic (PLEG): container finished" podID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerID="b7834f1a4606ddad5dd349ec6de485c5a61914966272a409cedca4b70bcf966d" exitCode=0 Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.910181 4888 generic.go:334] "Generic (PLEG): container finished" podID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerID="6e6afea0019eec19f6e27371d4a4a2a101dc77558e2a5567cc1c156ddbd5569a" exitCode=2 Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.910192 4888 generic.go:334] "Generic (PLEG): container finished" podID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerID="b22c1d48440e8b68055b8df22c81604c7040dcefb57b141d6fb48cec4b96bf74" exitCode=0 Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 
15:19:00.911477 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"885b6262-4e41-4b46-9a4f-f8a2d0123cc7","Type":"ContainerDied","Data":"b7834f1a4606ddad5dd349ec6de485c5a61914966272a409cedca4b70bcf966d"} Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.911529 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"885b6262-4e41-4b46-9a4f-f8a2d0123cc7","Type":"ContainerDied","Data":"6e6afea0019eec19f6e27371d4a4a2a101dc77558e2a5567cc1c156ddbd5569a"} Oct 06 15:19:00 crc kubenswrapper[4888]: I1006 15:19:00.911542 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"885b6262-4e41-4b46-9a4f-f8a2d0123cc7","Type":"ContainerDied","Data":"b22c1d48440e8b68055b8df22c81604c7040dcefb57b141d6fb48cec4b96bf74"} Oct 06 15:19:01 crc kubenswrapper[4888]: I1006 15:19:01.930956 4888 generic.go:334] "Generic (PLEG): container finished" podID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerID="5f0704593ba51b549e97337e3c99576b5ece80de48f5bc25ee7936baac24da15" exitCode=0 Oct 06 15:19:01 crc kubenswrapper[4888]: I1006 15:19:01.930994 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"885b6262-4e41-4b46-9a4f-f8a2d0123cc7","Type":"ContainerDied","Data":"5f0704593ba51b549e97337e3c99576b5ece80de48f5bc25ee7936baac24da15"} Oct 06 15:19:02 crc kubenswrapper[4888]: I1006 15:19:02.188641 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:19:02 crc kubenswrapper[4888]: I1006 15:19:02.188906 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" containerName="glance-log" containerID="cri-o://ca40c156cdbef7ea93804d2c29173089b45f33f039e87e8028f32b33b004ac5e" gracePeriod=30 Oct 06 15:19:02 crc kubenswrapper[4888]: I1006 15:19:02.189002 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" containerName="glance-httpd" containerID="cri-o://8f23fa36485dcf01f41482fad1ff980fdbf482b390b2ff18760b42fafd1ee635" gracePeriod=30 Oct 06 15:19:02 crc kubenswrapper[4888]: I1006 15:19:02.563387 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:19:02 crc kubenswrapper[4888]: I1006 15:19:02.563446 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:19:02 crc kubenswrapper[4888]: I1006 15:19:02.739120 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6896cb7668-gxlzd" Oct 06 15:19:02 crc kubenswrapper[4888]: I1006 15:19:02.991626 4888 generic.go:334] "Generic (PLEG): container finished" podID="e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" containerID="ca40c156cdbef7ea93804d2c29173089b45f33f039e87e8028f32b33b004ac5e" exitCode=143 Oct 06 15:19:02 crc kubenswrapper[4888]: I1006 15:19:02.991688 4888 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5","Type":"ContainerDied","Data":"ca40c156cdbef7ea93804d2c29173089b45f33f039e87e8028f32b33b004ac5e"} Oct 06 15:19:04 crc kubenswrapper[4888]: I1006 15:19:04.387866 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:19:04 crc kubenswrapper[4888]: I1006 15:19:04.393313 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5786cd685c-vnwnq" Oct 06 15:19:04 crc kubenswrapper[4888]: I1006 15:19:04.451104 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:19:04 crc kubenswrapper[4888]: I1006 15:19:04.451562 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="78d84d66-314f-4338-ab86-206b0db9f5b9" containerName="glance-log" containerID="cri-o://a65899bb3278b329aebbafd8e058151b8de4c4f5b2a405841f698cbcd9a6c1b2" gracePeriod=30 Oct 06 15:19:04 crc kubenswrapper[4888]: I1006 15:19:04.451737 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="78d84d66-314f-4338-ab86-206b0db9f5b9" containerName="glance-httpd" containerID="cri-o://da15d24284cb0c8bc8f729e022d0e8ba2eb02f6fb6cfd2c76a663fad85f1d379" gracePeriod=30 Oct 06 15:19:05 crc kubenswrapper[4888]: I1006 15:19:05.029322 4888 generic.go:334] "Generic (PLEG): container finished" podID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerID="2df7a1bb23460b3296677c8de731421c3f3ab7fd0970a63e26450d729add3c3c" exitCode=137 Oct 06 15:19:05 crc kubenswrapper[4888]: I1006 15:19:05.029490 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ffc855b96-nhf9w" event={"ID":"0574c745-cac5-4deb-87cc-a04c1b09aa9a","Type":"ContainerDied","Data":"2df7a1bb23460b3296677c8de731421c3f3ab7fd0970a63e26450d729add3c3c"} Oct 06 15:19:05 crc kubenswrapper[4888]: I1006 15:19:05.035243 4888 generic.go:334] "Generic (PLEG): container finished" podID="78d84d66-314f-4338-ab86-206b0db9f5b9" containerID="a65899bb3278b329aebbafd8e058151b8de4c4f5b2a405841f698cbcd9a6c1b2" exitCode=143 Oct 06 15:19:05 crc kubenswrapper[4888]: I1006 15:19:05.035344 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78d84d66-314f-4338-ab86-206b0db9f5b9","Type":"ContainerDied","Data":"a65899bb3278b329aebbafd8e058151b8de4c4f5b2a405841f698cbcd9a6c1b2"} Oct 06 15:19:05 crc kubenswrapper[4888]: I1006 15:19:05.113178 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-645dff855d-nzssq" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:55782->10.217.0.161:9311: read: connection reset by peer" Oct 06 15:19:05 crc kubenswrapper[4888]: I1006 15:19:05.113191 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-645dff855d-nzssq" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:55780->10.217.0.161:9311: read: connection reset by peer" Oct 06 15:19:05 crc kubenswrapper[4888]: I1006 15:19:05.246075 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:19:05 crc kubenswrapper[4888]: I1006 15:19:05.247674 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9885565f4-9j2hk" Oct 06 15:19:05 crc kubenswrapper[4888]: I1006 15:19:05.991188 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7dc98fc94f-nlvnj" Oct 06 15:19:06 crc kubenswrapper[4888]: I1006 15:19:06.082080 4888 generic.go:334] "Generic (PLEG): container finished" podID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerID="788feb770f48bdc17465c33dcd2ac15c70dfc2d54f787fb66fbe42f218b62809" exitCode=0 Oct 06 15:19:06 crc kubenswrapper[4888]: I1006 15:19:06.082220 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-645dff855d-nzssq" event={"ID":"780d72b4-5817-49d7-bca7-4eca7daf63df","Type":"ContainerDied","Data":"788feb770f48bdc17465c33dcd2ac15c70dfc2d54f787fb66fbe42f218b62809"} Oct 06 15:19:06 crc kubenswrapper[4888]: I1006 15:19:06.092494 4888 generic.go:334] "Generic (PLEG): container finished" podID="e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" containerID="8f23fa36485dcf01f41482fad1ff980fdbf482b390b2ff18760b42fafd1ee635" exitCode=0 Oct 06 15:19:06 crc kubenswrapper[4888]: I1006 15:19:06.093148 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5","Type":"ContainerDied","Data":"8f23fa36485dcf01f41482fad1ff980fdbf482b390b2ff18760b42fafd1ee635"} Oct 06 15:19:06 crc kubenswrapper[4888]: I1006 15:19:06.094752 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6896cb7668-gxlzd"] Oct 06 15:19:06 crc kubenswrapper[4888]: I1006 15:19:06.095003 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6896cb7668-gxlzd" podUID="6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" containerName="neutron-httpd" containerID="cri-o://54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23" gracePeriod=30 Oct 06 15:19:06 crc kubenswrapper[4888]: I1006 15:19:06.097517 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6896cb7668-gxlzd" podUID="6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" containerName="neutron-api" containerID="cri-o://985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe" gracePeriod=30 Oct 06 15:19:06 crc kubenswrapper[4888]: I1006 15:19:06.803189 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-ffc855b96-nhf9w" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Oct 06 15:19:07 crc kubenswrapper[4888]: I1006 15:19:07.103216 4888 generic.go:334] "Generic (PLEG): container finished" podID="6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" containerID="54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23" exitCode=0 Oct 06 15:19:07 crc kubenswrapper[4888]: I1006 15:19:07.103281 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6896cb7668-gxlzd" event={"ID":"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606","Type":"ContainerDied","Data":"54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23"} Oct 06 15:19:07 crc kubenswrapper[4888]: I1006 15:19:07.121669 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-645dff855d-nzssq" 
podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused" Oct 06 15:19:07 crc kubenswrapper[4888]: I1006 15:19:07.121771 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:19:07 crc kubenswrapper[4888]: I1006 15:19:07.121667 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-645dff855d-nzssq" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused" Oct 06 15:19:08 crc kubenswrapper[4888]: E1006 15:19:08.123576 4888 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78d84d66_314f_4338_ab86_206b0db9f5b9.slice/crio-conmon-da15d24284cb0c8bc8f729e022d0e8ba2eb02f6fb6cfd2c76a663fad85f1d379.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78d84d66_314f_4338_ab86_206b0db9f5b9.slice/crio-da15d24284cb0c8bc8f729e022d0e8ba2eb02f6fb6cfd2c76a663fad85f1d379.scope\": RecentStats: unable to find data in memory cache]" Oct 06 15:19:08 crc kubenswrapper[4888]: I1006 15:19:08.132474 4888 generic.go:334] "Generic (PLEG): container finished" podID="78d84d66-314f-4338-ab86-206b0db9f5b9" containerID="da15d24284cb0c8bc8f729e022d0e8ba2eb02f6fb6cfd2c76a663fad85f1d379" exitCode=0 Oct 06 15:19:08 crc kubenswrapper[4888]: I1006 15:19:08.132522 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78d84d66-314f-4338-ab86-206b0db9f5b9","Type":"ContainerDied","Data":"da15d24284cb0c8bc8f729e022d0e8ba2eb02f6fb6cfd2c76a663fad85f1d379"} Oct 06 15:19:09 crc kubenswrapper[4888]: E1006 15:19:09.232764 4888 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Oct 06 15:19:09 crc kubenswrapper[4888]: E1006 15:19:09.233340 4888 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5ffhd8h69h564h5b8h584hfdh566h5b8hfbh5d4h5ch59fh5dfh76h586hcfh57dh64ch85h585h658hffh64fh59fh687hc7h5b6hcbh59ch55dh64bq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kvzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(655523f3-6f3b-4675-8b5a-4c0451a185ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 06 15:19:09 crc kubenswrapper[4888]: E1006 15:19:09.235140 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="655523f3-6f3b-4675-8b5a-4c0451a185ca"
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.456729 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-h6d7m"
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.535417 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-config-data\") pod \"caf441af-cd19-416e-9759-8634523c0979\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") "
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.535463 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tzpd\" (UniqueName: \"kubernetes.io/projected/caf441af-cd19-416e-9759-8634523c0979-kube-api-access-4tzpd\") pod \"caf441af-cd19-416e-9759-8634523c0979\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") "
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.535645 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-db-sync-config-data\") pod \"caf441af-cd19-416e-9759-8634523c0979\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") "
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.535716 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-combined-ca-bundle\") pod \"caf441af-cd19-416e-9759-8634523c0979\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") "
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.535758 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/caf441af-cd19-416e-9759-8634523c0979-etc-machine-id\") pod \"caf441af-cd19-416e-9759-8634523c0979\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") "
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.535777 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-scripts\") pod \"caf441af-cd19-416e-9759-8634523c0979\" (UID: \"caf441af-cd19-416e-9759-8634523c0979\") "
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.541788 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/caf441af-cd19-416e-9759-8634523c0979-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "caf441af-cd19-416e-9759-8634523c0979" (UID: "caf441af-cd19-416e-9759-8634523c0979"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.547084 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-scripts" (OuterVolumeSpecName: "scripts") pod "caf441af-cd19-416e-9759-8634523c0979" (UID: "caf441af-cd19-416e-9759-8634523c0979"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.549343 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf441af-cd19-416e-9759-8634523c0979-kube-api-access-4tzpd" (OuterVolumeSpecName: "kube-api-access-4tzpd") pod "caf441af-cd19-416e-9759-8634523c0979" (UID: "caf441af-cd19-416e-9759-8634523c0979"). InnerVolumeSpecName "kube-api-access-4tzpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.551937 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "caf441af-cd19-416e-9759-8634523c0979" (UID: "caf441af-cd19-416e-9759-8634523c0979"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.621841 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caf441af-cd19-416e-9759-8634523c0979" (UID: "caf441af-cd19-416e-9759-8634523c0979"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.637720 4888 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.637757 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.637771 4888 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/caf441af-cd19-416e-9759-8634523c0979-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.637786 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-scripts\") on node \"crc\" DevicePath \"\""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.637875 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tzpd\" (UniqueName: \"kubernetes.io/projected/caf441af-cd19-416e-9759-8634523c0979-kube-api-access-4tzpd\") on node \"crc\" DevicePath \"\""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.705733 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-config-data" (OuterVolumeSpecName: "config-data") pod "caf441af-cd19-416e-9759-8634523c0979" (UID: "caf441af-cd19-416e-9759-8634523c0979"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.739828 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf441af-cd19-416e-9759-8634523c0979-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.765971 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.846261 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffc855b96-nhf9w"
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.855641 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-run-httpd\") pod \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") "
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.855748 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-config-data\") pod \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") "
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.855772 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-log-httpd\") pod \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") "
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.855803 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-combined-ca-bundle\") pod \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") "
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.855858 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-scripts\") pod \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") "
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.861259 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "885b6262-4e41-4b46-9a4f-f8a2d0123cc7" (UID: "885b6262-4e41-4b46-9a4f-f8a2d0123cc7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.861665 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "885b6262-4e41-4b46-9a4f-f8a2d0123cc7" (UID: "885b6262-4e41-4b46-9a4f-f8a2d0123cc7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.865705 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hqkj\" (UniqueName: \"kubernetes.io/projected/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-kube-api-access-8hqkj\") pod \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.865771 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-sg-core-conf-yaml\") pod \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\" (UID: \"885b6262-4e41-4b46-9a4f-f8a2d0123cc7\") " Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.866410 4888 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.866422 4888 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.910230 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-scripts" (OuterVolumeSpecName: "scripts") pod "885b6262-4e41-4b46-9a4f-f8a2d0123cc7" (UID: "885b6262-4e41-4b46-9a4f-f8a2d0123cc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.920495 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-kube-api-access-8hqkj" (OuterVolumeSpecName: "kube-api-access-8hqkj") pod "885b6262-4e41-4b46-9a4f-f8a2d0123cc7" (UID: "885b6262-4e41-4b46-9a4f-f8a2d0123cc7"). InnerVolumeSpecName "kube-api-access-8hqkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.962025 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.971723 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-scripts\") pod \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " Oct 06 15:19:09 crc kubenswrapper[4888]: I1006 15:19:09.980401 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "885b6262-4e41-4b46-9a4f-f8a2d0123cc7" (UID: "885b6262-4e41-4b46-9a4f-f8a2d0123cc7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.012040 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-tls-certs\") pod \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.012119 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-secret-key\") pod \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.012231 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0574c745-cac5-4deb-87cc-a04c1b09aa9a-logs\") pod \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.012312 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms462\" (UniqueName: \"kubernetes.io/projected/0574c745-cac5-4deb-87cc-a04c1b09aa9a-kube-api-access-ms462\") pod \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.012343 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-combined-ca-bundle\") pod \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.012399 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-config-data\") pod \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\" (UID: \"0574c745-cac5-4deb-87cc-a04c1b09aa9a\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.027619 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.027668 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hqkj\" (UniqueName: \"kubernetes.io/projected/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-kube-api-access-8hqkj\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.027716 4888 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.035754 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0574c745-cac5-4deb-87cc-a04c1b09aa9a-logs" (OuterVolumeSpecName: "logs") pod "0574c745-cac5-4deb-87cc-a04c1b09aa9a" (UID: "0574c745-cac5-4deb-87cc-a04c1b09aa9a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.039605 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-scripts" (OuterVolumeSpecName: "scripts") pod "0574c745-cac5-4deb-87cc-a04c1b09aa9a" (UID: "0574c745-cac5-4deb-87cc-a04c1b09aa9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.058481 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0574c745-cac5-4deb-87cc-a04c1b09aa9a-kube-api-access-ms462" (OuterVolumeSpecName: "kube-api-access-ms462") pod "0574c745-cac5-4deb-87cc-a04c1b09aa9a" (UID: "0574c745-cac5-4deb-87cc-a04c1b09aa9a"). InnerVolumeSpecName "kube-api-access-ms462". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.058609 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0574c745-cac5-4deb-87cc-a04c1b09aa9a" (UID: "0574c745-cac5-4deb-87cc-a04c1b09aa9a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.068363 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-config-data" (OuterVolumeSpecName: "config-data") pod "0574c745-cac5-4deb-87cc-a04c1b09aa9a" (UID: "0574c745-cac5-4deb-87cc-a04c1b09aa9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.087386 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "885b6262-4e41-4b46-9a4f-f8a2d0123cc7" (UID: "885b6262-4e41-4b46-9a4f-f8a2d0123cc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.096170 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "0574c745-cac5-4deb-87cc-a04c1b09aa9a" (UID: "0574c745-cac5-4deb-87cc-a04c1b09aa9a"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.116171 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-config-data" (OuterVolumeSpecName: "config-data") pod "885b6262-4e41-4b46-9a4f-f8a2d0123cc7" (UID: "885b6262-4e41-4b46-9a4f-f8a2d0123cc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.118330 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0574c745-cac5-4deb-87cc-a04c1b09aa9a" (UID: "0574c745-cac5-4deb-87cc-a04c1b09aa9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.128645 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/780d72b4-5817-49d7-bca7-4eca7daf63df-logs\") pod \"780d72b4-5817-49d7-bca7-4eca7daf63df\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.128735 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4klt7\" (UniqueName: \"kubernetes.io/projected/780d72b4-5817-49d7-bca7-4eca7daf63df-kube-api-access-4klt7\") pod \"780d72b4-5817-49d7-bca7-4eca7daf63df\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.128808 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data-custom\") pod \"780d72b4-5817-49d7-bca7-4eca7daf63df\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.128902 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data\") pod \"780d72b4-5817-49d7-bca7-4eca7daf63df\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.128956 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-combined-ca-bundle\") pod \"780d72b4-5817-49d7-bca7-4eca7daf63df\" (UID: \"780d72b4-5817-49d7-bca7-4eca7daf63df\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.129679 4888 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.129704 4888 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.129716 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.129728 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/885b6262-4e41-4b46-9a4f-f8a2d0123cc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.129739 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0574c745-cac5-4deb-87cc-a04c1b09aa9a-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.129750 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms462\" (UniqueName: \"kubernetes.io/projected/0574c745-cac5-4deb-87cc-a04c1b09aa9a-kube-api-access-ms462\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.129761 4888 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0574c745-cac5-4deb-87cc-a04c1b09aa9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.129769 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.129778 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0574c745-cac5-4deb-87cc-a04c1b09aa9a-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.131692 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780d72b4-5817-49d7-bca7-4eca7daf63df-logs" (OuterVolumeSpecName: "logs") pod "780d72b4-5817-49d7-bca7-4eca7daf63df" (UID: "780d72b4-5817-49d7-bca7-4eca7daf63df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.138093 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.141938 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/780d72b4-5817-49d7-bca7-4eca7daf63df-kube-api-access-4klt7" (OuterVolumeSpecName: "kube-api-access-4klt7") pod "780d72b4-5817-49d7-bca7-4eca7daf63df" (UID: "780d72b4-5817-49d7-bca7-4eca7daf63df"). InnerVolumeSpecName "kube-api-access-4klt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.145981 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "780d72b4-5817-49d7-bca7-4eca7daf63df" (UID: "780d72b4-5817-49d7-bca7-4eca7daf63df"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.169011 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "780d72b4-5817-49d7-bca7-4eca7daf63df" (UID: "780d72b4-5817-49d7-bca7-4eca7daf63df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.186413 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-645dff855d-nzssq" event={"ID":"780d72b4-5817-49d7-bca7-4eca7daf63df","Type":"ContainerDied","Data":"222dab747425b358a36171ba8fc00a21e1c88e92244b6e39fcfd623d05526f3d"} Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.186464 4888 scope.go:117] "RemoveContainer" containerID="788feb770f48bdc17465c33dcd2ac15c70dfc2d54f787fb66fbe42f218b62809" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.186575 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-645dff855d-nzssq" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.224385 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h6d7m" event={"ID":"caf441af-cd19-416e-9759-8634523c0979","Type":"ContainerDied","Data":"70ec273957b7c87730239d59de5e94672ceeb9cd833e59ab5f4023495ca2d6af"} Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.224428 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70ec273957b7c87730239d59de5e94672ceeb9cd833e59ab5f4023495ca2d6af" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.224544 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-h6d7m" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.233459 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-scripts\") pod \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.233544 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.233585 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-public-tls-certs\") pod \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.233641 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-config-data\") pod \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.233741 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-logs\") pod \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.233839 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-httpd-run\") pod \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.233900 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-combined-ca-bundle\") pod \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.233942 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md4hk\" (UniqueName: \"kubernetes.io/projected/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-kube-api-access-md4hk\") pod \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\" (UID: \"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5\") " Oct 06 15:19:10 crc 
kubenswrapper[4888]: I1006 15:19:10.234402 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.234420 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/780d72b4-5817-49d7-bca7-4eca7daf63df-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.234433 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4klt7\" (UniqueName: \"kubernetes.io/projected/780d72b4-5817-49d7-bca7-4eca7daf63df-kube-api-access-4klt7\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.234445 4888 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.240388 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" (UID: "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.240650 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-logs" (OuterVolumeSpecName: "logs") pod "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" (UID: "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.275317 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-scripts" (OuterVolumeSpecName: "scripts") pod "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" (UID: "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.300137 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" (UID: "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.303358 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"885b6262-4e41-4b46-9a4f-f8a2d0123cc7","Type":"ContainerDied","Data":"1de6919f8bc3bfa8ab8e1731fb5b8fc8a6e74ace18f8c57d6f6ea327ec89d669"} Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.303489 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.323966 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ffc855b96-nhf9w" event={"ID":"0574c745-cac5-4deb-87cc-a04c1b09aa9a","Type":"ContainerDied","Data":"96a798007c9526c3fbc0c203fd8cab7b28b649f2e352cd7b53a65107bd1e47a3"} Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.324044 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ffc855b96-nhf9w" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.337002 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.337000 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e4bc3a00-34b4-4a6f-b5ac-297277e59bc5","Type":"ContainerDied","Data":"ebfc2c8564d0aff3b6ee2867e1ad4dcb73d31985b149891e4f2c3ce8f2b9c27f"} Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.340781 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="655523f3-6f3b-4675-8b5a-4c0451a185ca" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.343315 4888 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.352953 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.352985 4888 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.353001 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.344321 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data" (OuterVolumeSpecName: "config-data") pod "780d72b4-5817-49d7-bca7-4eca7daf63df" (UID: "780d72b4-5817-49d7-bca7-4eca7daf63df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.335161 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-kube-api-access-md4hk" (OuterVolumeSpecName: "kube-api-access-md4hk") pod "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" (UID: "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5"). InnerVolumeSpecName "kube-api-access-md4hk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.392031 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" (UID: "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.408243 4888 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.442090 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-config-data" (OuterVolumeSpecName: "config-data") pod "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" (UID: "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.452109 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" (UID: "e4bc3a00-34b4-4a6f-b5ac-297277e59bc5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.454934 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780d72b4-5817-49d7-bca7-4eca7daf63df-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.454961 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.454971 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md4hk\" (UniqueName: \"kubernetes.io/projected/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-kube-api-access-md4hk\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.454983 4888 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.454995 4888 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.455006 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.566041 4888 scope.go:117] "RemoveContainer" containerID="270ee48e80e0db6b923007f2dca28a7a3f56dfb776d4946e765971cfcb203e78" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.595448 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-ffc855b96-nhf9w"] Oct 06 15:19:10 crc 
kubenswrapper[4888]: I1006 15:19:10.612306 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-ffc855b96-nhf9w"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.618882 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-645dff855d-nzssq"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.620186 4888 scope.go:117] "RemoveContainer" containerID="b7834f1a4606ddad5dd349ec6de485c5a61914966272a409cedca4b70bcf966d" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.627695 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-645dff855d-nzssq"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.641750 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.647692 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.680554 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.681181 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" containerName="glance-httpd" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681214 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" containerName="glance-httpd" Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.681226 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="ceilometer-notification-agent" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681232 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="ceilometer-notification-agent" Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.681246 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681251 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon" Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.681269 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681274 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api" Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.681518 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon-log" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681527 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon-log" Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.681550 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf441af-cd19-416e-9759-8634523c0979" containerName="cinder-db-sync" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681556 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf441af-cd19-416e-9759-8634523c0979" containerName="cinder-db-sync" Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.681566 4888 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="ceilometer-central-agent" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681572 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="ceilometer-central-agent" Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.681582 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="sg-core" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681589 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="sg-core" Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.681604 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="proxy-httpd" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681611 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="proxy-httpd" Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.681620 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api-log" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681626 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api-log" Oct 06 15:19:10 crc kubenswrapper[4888]: E1006 15:19:10.681633 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" containerName="glance-log" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681639 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" containerName="glance-log" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681823 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681842 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="ceilometer-notification-agent" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681852 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="proxy-httpd" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681862 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf441af-cd19-416e-9759-8634523c0979" containerName="cinder-db-sync" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681890 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="sg-core" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681900 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681908 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" containerName="glance-log" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681915 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" containerName="barbican-api-log" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 
15:19:10.681925 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" containerName="ceilometer-central-agent" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681938 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" containerName="glance-httpd" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.681946 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" containerName="horizon-log" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.683009 4888 scope.go:117] "RemoveContainer" containerID="6e6afea0019eec19f6e27371d4a4a2a101dc77558e2a5567cc1c156ddbd5569a" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.684208 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.691539 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.691730 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.735767 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.736820 4888 scope.go:117] "RemoveContainer" containerID="b22c1d48440e8b68055b8df22c81604c7040dcefb57b141d6fb48cec4b96bf74" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.775870 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.784141 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.806344 4888 scope.go:117] "RemoveContainer" containerID="5f0704593ba51b549e97337e3c99576b5ece80de48f5bc25ee7936baac24da15" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.824868 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.826559 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.830824 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.831100 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-k2zw7" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.842186 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.862464 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-config-data\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.862827 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-scripts\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.862881 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zt74\" (UniqueName: \"kubernetes.io/projected/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-kube-api-access-8zt74\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.862908 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-log-httpd\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.862938 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-run-httpd\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.862985 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.863009 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.863072 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.866086 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.866906 4888 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.869354 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.874627 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.874992 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.891290 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.892396 4888 scope.go:117] "RemoveContainer" containerID="81d5e6ec06435d3284bb89303c374e53eb6fe1e13976a3a09a106e21eef6b6f4" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.965616 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0574c745-cac5-4deb-87cc-a04c1b09aa9a" path="/var/lib/kubelet/pods/0574c745-cac5-4deb-87cc-a04c1b09aa9a/volumes" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.966947 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-scripts\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.966981 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-scripts\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967004 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-config-data\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967033 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5z78\" (UniqueName: \"kubernetes.io/projected/4bc16433-b83f-4a33-957a-3587ee2f7893-kube-api-access-t5z78\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967050 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zt74\" (UniqueName: \"kubernetes.io/projected/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-kube-api-access-8zt74\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967071 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-log-httpd\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967090 4888 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967107 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca688e57-9165-4c51-8de7-366eebeb8596-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967123 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-run-httpd\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967137 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967157 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967189 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967203 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967224 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967242 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bc16433-b83f-4a33-957a-3587ee2f7893-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967261 4888 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967262 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="780d72b4-5817-49d7-bca7-4eca7daf63df" path="/var/lib/kubelet/pods/780d72b4-5817-49d7-bca7-4eca7daf63df/volumes" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967283 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvh6k\" (UniqueName: \"kubernetes.io/projected/ca688e57-9165-4c51-8de7-366eebeb8596-kube-api-access-pvh6k\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967313 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca688e57-9165-4c51-8de7-366eebeb8596-logs\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967336 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967368 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-scripts\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.967384 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-config-data\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.968788 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="885b6262-4e41-4b46-9a4f-f8a2d0123cc7" path="/var/lib/kubelet/pods/885b6262-4e41-4b46-9a4f-f8a2d0123cc7/volumes" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.970050 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4bc3a00-34b4-4a6f-b5ac-297277e59bc5" path="/var/lib/kubelet/pods/e4bc3a00-34b4-4a6f-b5ac-297277e59bc5/volumes" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.972738 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-r4vsc"] Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.979046 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.979370 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-log-httpd\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.979624 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-run-httpd\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.983556 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-config-data\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.984390 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:10 crc kubenswrapper[4888]: I1006 15:19:10.990032 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.001499 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-scripts\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.028466 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zt74\" (UniqueName: \"kubernetes.io/projected/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-kube-api-access-8zt74\") pod \"ceilometer-0\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " pod="openstack/ceilometer-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.032702 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-r4vsc"] Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.073962 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074011 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd2cm\" (UniqueName: \"kubernetes.io/projected/e50b443c-75ad-4c36-871e-6d486f99d547-kube-api-access-kd2cm\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074037 4888 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca688e57-9165-4c51-8de7-366eebeb8596-logs\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074073 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074120 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-scripts\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074158 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-scripts\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074186 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-config-data\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074211 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074253 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5z78\" (UniqueName: \"kubernetes.io/projected/4bc16433-b83f-4a33-957a-3587ee2f7893-kube-api-access-t5z78\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074279 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074308 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-config\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074345 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074374 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca688e57-9165-4c51-8de7-366eebeb8596-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074418 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074440 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074457 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-svc\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074510 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074529 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074548 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bc16433-b83f-4a33-957a-3587ee2f7893-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.074575 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvh6k\" (UniqueName: \"kubernetes.io/projected/ca688e57-9165-4c51-8de7-366eebeb8596-kube-api-access-pvh6k\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.076354 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca688e57-9165-4c51-8de7-366eebeb8596-logs\") pod \"glance-default-external-api-0\" (UID: 
\"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.076720 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.086458 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ca688e57-9165-4c51-8de7-366eebeb8596-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.091371 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bc16433-b83f-4a33-957a-3587ee2f7893-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.095149 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.115870 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-config-data\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.117754 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-scripts\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.118518 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.123359 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.124300 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-scripts\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.124878 4888 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca688e57-9165-4c51-8de7-366eebeb8596-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.132535 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvh6k\" (UniqueName: \"kubernetes.io/projected/ca688e57-9165-4c51-8de7-366eebeb8596-kube-api-access-pvh6k\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.132778 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.186700 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.186751 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-config\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.186808 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-svc\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.186890 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.186913 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd2cm\" (UniqueName: \"kubernetes.io/projected/e50b443c-75ad-4c36-871e-6d486f99d547-kube-api-access-kd2cm\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.187020 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.187771 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.194113 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.194942 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-config\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.196023 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.196204 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-svc\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.222500 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5z78\" (UniqueName: \"kubernetes.io/projected/4bc16433-b83f-4a33-957a-3587ee2f7893-kube-api-access-t5z78\") pod \"cinder-scheduler-0\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.229268 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd2cm\" (UniqueName: \"kubernetes.io/projected/e50b443c-75ad-4c36-871e-6d486f99d547-kube-api-access-kd2cm\") pod \"dnsmasq-dns-5784cf869f-r4vsc\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.310257 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.316401 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.318277 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.342665 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.360175 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.360653 4888 scope.go:117] "RemoveContainer" containerID="2df7a1bb23460b3296677c8de731421c3f3ab7fd0970a63e26450d729add3c3c" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.364964 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ca688e57-9165-4c51-8de7-366eebeb8596\") " pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.371127 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.394161 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-scripts\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.394207 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.394246 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data-custom\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.394295 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.394320 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62cnk\" (UniqueName: \"kubernetes.io/projected/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-kube-api-access-62cnk\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.394376 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.394408 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-logs\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.497583 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.499335 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-logs\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.499416 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-scripts\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.499438 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.499471 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data-custom\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.499517 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.499543 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62cnk\" (UniqueName: \"kubernetes.io/projected/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-kube-api-access-62cnk\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.499594 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.500367 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.500870 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-logs\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 
15:19:11.512275 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.515899 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.516328 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.517608 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-scripts\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.520698 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data-custom\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.533843 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62cnk\" (UniqueName: \"kubernetes.io/projected/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-kube-api-access-62cnk\") pod \"cinder-api-0\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.591238 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.640122 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.705387 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-logs\") pod \"78d84d66-314f-4338-ab86-206b0db9f5b9\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.705538 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-httpd-run\") pod \"78d84d66-314f-4338-ab86-206b0db9f5b9\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.705592 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-combined-ca-bundle\") pod \"78d84d66-314f-4338-ab86-206b0db9f5b9\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.705685 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-internal-tls-certs\") pod \"78d84d66-314f-4338-ab86-206b0db9f5b9\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.705749 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj8jh\" (UniqueName: \"kubernetes.io/projected/78d84d66-314f-4338-ab86-206b0db9f5b9-kube-api-access-fj8jh\") pod \"78d84d66-314f-4338-ab86-206b0db9f5b9\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.705790 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-config-data\") pod \"78d84d66-314f-4338-ab86-206b0db9f5b9\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.705868 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-scripts\") pod \"78d84d66-314f-4338-ab86-206b0db9f5b9\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.705941 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"78d84d66-314f-4338-ab86-206b0db9f5b9\" (UID: \"78d84d66-314f-4338-ab86-206b0db9f5b9\") " Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.706769 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-logs" (OuterVolumeSpecName: "logs") pod "78d84d66-314f-4338-ab86-206b0db9f5b9" (UID: "78d84d66-314f-4338-ab86-206b0db9f5b9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.710327 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "78d84d66-314f-4338-ab86-206b0db9f5b9" (UID: "78d84d66-314f-4338-ab86-206b0db9f5b9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.726303 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d84d66-314f-4338-ab86-206b0db9f5b9-kube-api-access-fj8jh" (OuterVolumeSpecName: "kube-api-access-fj8jh") pod "78d84d66-314f-4338-ab86-206b0db9f5b9" (UID: "78d84d66-314f-4338-ab86-206b0db9f5b9"). InnerVolumeSpecName "kube-api-access-fj8jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.726778 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-scripts" (OuterVolumeSpecName: "scripts") pod "78d84d66-314f-4338-ab86-206b0db9f5b9" (UID: "78d84d66-314f-4338-ab86-206b0db9f5b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.755671 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "78d84d66-314f-4338-ab86-206b0db9f5b9" (UID: "78d84d66-314f-4338-ab86-206b0db9f5b9"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.778909 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78d84d66-314f-4338-ab86-206b0db9f5b9" (UID: "78d84d66-314f-4338-ab86-206b0db9f5b9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.779053 4888 scope.go:117] "RemoveContainer" containerID="8f23fa36485dcf01f41482fad1ff980fdbf482b390b2ff18760b42fafd1ee635" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.811521 4888 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.811555 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.811575 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj8jh\" (UniqueName: \"kubernetes.io/projected/78d84d66-314f-4338-ab86-206b0db9f5b9-kube-api-access-fj8jh\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.811590 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.811618 4888 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.811629 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78d84d66-314f-4338-ab86-206b0db9f5b9-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.880658 4888 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.883562 4888 scope.go:117] "RemoveContainer" containerID="ca40c156cdbef7ea93804d2c29173089b45f33f039e87e8028f32b33b004ac5e" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.912868 4888 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.942911 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "78d84d66-314f-4338-ab86-206b0db9f5b9" (UID: "78d84d66-314f-4338-ab86-206b0db9f5b9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:11 crc kubenswrapper[4888]: I1006 15:19:11.970325 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-config-data" (OuterVolumeSpecName: "config-data") pod "78d84d66-314f-4338-ab86-206b0db9f5b9" (UID: "78d84d66-314f-4338-ab86-206b0db9f5b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.014601 4888 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.014641 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78d84d66-314f-4338-ab86-206b0db9f5b9-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.356061 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6896cb7668-gxlzd" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.371728 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.404491 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-r4vsc"] Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.424711 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mgsq\" (UniqueName: \"kubernetes.io/projected/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-kube-api-access-4mgsq\") pod \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.424825 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-ovndb-tls-certs\") pod \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.424865 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-config\") pod \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.424893 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-combined-ca-bundle\") pod \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.424995 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-httpd-config\") pod \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\" (UID: \"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606\") " Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.454595 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" (UID: "6d0d5bd2-7e67-4a0d-b32c-ac23389ca606"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.463091 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-kube-api-access-4mgsq" (OuterVolumeSpecName: "kube-api-access-4mgsq") pod "6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" (UID: "6d0d5bd2-7e67-4a0d-b32c-ac23389ca606"). InnerVolumeSpecName "kube-api-access-4mgsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.499636 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" event={"ID":"e50b443c-75ad-4c36-871e-6d486f99d547","Type":"ContainerStarted","Data":"63d9da943a196643cca0b491b0f6c825ec35ce0d3276fb5c7d7f4153dbdf0440"} Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.526743 4888 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.526772 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mgsq\" (UniqueName: \"kubernetes.io/projected/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-kube-api-access-4mgsq\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.526956 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78d84d66-314f-4338-ab86-206b0db9f5b9","Type":"ContainerDied","Data":"d178010b587b91344ea46d407a450b8acce5d39b088aa61ca49793985a544144"} Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.526991 4888 scope.go:117] "RemoveContainer" containerID="da15d24284cb0c8bc8f729e022d0e8ba2eb02f6fb6cfd2c76a663fad85f1d379" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.527080 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.564366 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.605051 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-config" (OuterVolumeSpecName: "config") pod "6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" (UID: "6d0d5bd2-7e67-4a0d-b32c-ac23389ca606"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.605171 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" (UID: "6d0d5bd2-7e67-4a0d-b32c-ac23389ca606"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.612265 4888 generic.go:334] "Generic (PLEG): container finished" podID="6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" containerID="985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe" exitCode=0 Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.612370 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6896cb7668-gxlzd" event={"ID":"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606","Type":"ContainerDied","Data":"985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe"} Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.612402 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6896cb7668-gxlzd" event={"ID":"6d0d5bd2-7e67-4a0d-b32c-ac23389ca606","Type":"ContainerDied","Data":"142f00b70423cf748c45236953e287f36a881de53da375bf304bf206d3a5b7ab"} Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.614997 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6896cb7668-gxlzd" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.629030 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.629182 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.630730 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.632548 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9aae738b-f280-4150-b5ff-5cf0e7abf2e3","Type":"ContainerStarted","Data":"a99b9ff2076651f8eb2922ced58b3472106794966ab38fc90f8096990eaab535"} Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.644538 4888 scope.go:117] "RemoveContainer" containerID="a65899bb3278b329aebbafd8e058151b8de4c4f5b2a405841f698cbcd9a6c1b2" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.651275 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.663997 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.679790 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.696645 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:19:12 crc kubenswrapper[4888]: E1006 15:19:12.697579 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" containerName="neutron-httpd" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.697598 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" containerName="neutron-httpd" Oct 06 15:19:12 crc kubenswrapper[4888]: E1006 15:19:12.697613 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d84d66-314f-4338-ab86-206b0db9f5b9" containerName="glance-log" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 
15:19:12.697687 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d84d66-314f-4338-ab86-206b0db9f5b9" containerName="glance-log" Oct 06 15:19:12 crc kubenswrapper[4888]: E1006 15:19:12.697702 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" containerName="neutron-api" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.697707 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" containerName="neutron-api" Oct 06 15:19:12 crc kubenswrapper[4888]: E1006 15:19:12.697731 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d84d66-314f-4338-ab86-206b0db9f5b9" containerName="glance-httpd" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.697736 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d84d66-314f-4338-ab86-206b0db9f5b9" containerName="glance-httpd" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.697930 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" containerName="neutron-api" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.697946 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d84d66-314f-4338-ab86-206b0db9f5b9" containerName="glance-log" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.697956 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d84d66-314f-4338-ab86-206b0db9f5b9" containerName="glance-httpd" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.697965 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" containerName="neutron-httpd" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.698805 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.701435 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.701760 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.713173 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" (UID: "6d0d5bd2-7e67-4a0d-b32c-ac23389ca606"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.723009 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.731300 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.731360 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.731409 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l4ps\" (UniqueName: \"kubernetes.io/projected/f820ff6f-d5ea-4422-991f-a982fb9b563d-kube-api-access-4l4ps\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.731439 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.731459 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f820ff6f-d5ea-4422-991f-a982fb9b563d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.731485 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f820ff6f-d5ea-4422-991f-a982fb9b563d-logs\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.731512 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.731536 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.749839 4888 scope.go:117] "RemoveContainer" 
containerID="54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.756738 4888 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.857900 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.857969 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.858016 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l4ps\" (UniqueName: \"kubernetes.io/projected/f820ff6f-d5ea-4422-991f-a982fb9b563d-kube-api-access-4l4ps\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.858046 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.858067 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f820ff6f-d5ea-4422-991f-a982fb9b563d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.858093 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f820ff6f-d5ea-4422-991f-a982fb9b563d-logs\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.858117 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.858142 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.859556 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/f820ff6f-d5ea-4422-991f-a982fb9b563d-logs\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.860059 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f820ff6f-d5ea-4422-991f-a982fb9b563d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.860478 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.868483 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.869351 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.870406 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.871507 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f820ff6f-d5ea-4422-991f-a982fb9b563d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.908424 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.910003 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l4ps\" (UniqueName: \"kubernetes.io/projected/f820ff6f-d5ea-4422-991f-a982fb9b563d-kube-api-access-4l4ps\") pod \"glance-default-internal-api-0\" (UID: \"f820ff6f-d5ea-4422-991f-a982fb9b563d\") " pod="openstack/glance-default-internal-api-0" Oct 06 15:19:12 crc kubenswrapper[4888]: I1006 15:19:12.919666 4888 scope.go:117] "RemoveContainer" containerID="985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe" Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.086459 4888 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.112613 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d84d66-314f-4338-ab86-206b0db9f5b9" path="/var/lib/kubelet/pods/78d84d66-314f-4338-ab86-206b0db9f5b9/volumes" Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.132890 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6896cb7668-gxlzd"] Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.132950 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6896cb7668-gxlzd"] Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.142976 4888 scope.go:117] "RemoveContainer" containerID="54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23" Oct 06 15:19:13 crc kubenswrapper[4888]: E1006 15:19:13.150003 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23\": container with ID starting with 54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23 not found: ID does not exist" containerID="54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23" Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.150077 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23"} err="failed to get container status \"54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23\": rpc error: code = NotFound desc = could not find container \"54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23\": container with ID starting with 54bdd68a93c9cf26e06cc459b3a8e9e59d259358a447c0cd6e3c5fa11ad93f23 not found: ID does not exist" Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.150123 4888 scope.go:117] "RemoveContainer" containerID="985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe" Oct 06 15:19:13 crc kubenswrapper[4888]: E1006 15:19:13.154468 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe\": container with ID starting with 985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe not found: ID does not exist" containerID="985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe" Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.154502 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe"} err="failed to get container status \"985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe\": rpc error: code = NotFound desc = could not find container \"985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe\": container with ID starting with 985f628d8cf4d0806fa005241d2fc5279414cd479a12d1ba5043289214f18dbe not found: ID does not exist" Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.672379 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f","Type":"ContainerStarted","Data":"878e441dc1b775673e5dcf5b9453b58157ab9effa60006ba32ef809e1d9a74c4"} Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.705105 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"ca688e57-9165-4c51-8de7-366eebeb8596","Type":"ContainerStarted","Data":"1bab8f1c548efe6764d5c3e885c49360c2a5afdf8de1d346b34402d5c5223be4"} Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.717267 4888 generic.go:334] "Generic (PLEG): container finished" podID="e50b443c-75ad-4c36-871e-6d486f99d547" containerID="3fc72f5910feda18f54b3fd907167ac5fde05cb87351eb930e960a1da4b63f66" exitCode=0 Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.717331 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" event={"ID":"e50b443c-75ad-4c36-871e-6d486f99d547","Type":"ContainerDied","Data":"3fc72f5910feda18f54b3fd907167ac5fde05cb87351eb930e960a1da4b63f66"} Oct 06 15:19:13 crc kubenswrapper[4888]: I1006 15:19:13.727597 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4bc16433-b83f-4a33-957a-3587ee2f7893","Type":"ContainerStarted","Data":"d472a8e921a297f33304da36f6c3503461dd380e13790083ff9b59e96406bbd9"} Oct 06 15:19:14 crc kubenswrapper[4888]: I1006 15:19:14.002474 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 15:19:14 crc kubenswrapper[4888]: W1006 15:19:14.067070 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf820ff6f_d5ea_4422_991f_a982fb9b563d.slice/crio-fd0337c995c0629633612fec6a485e716bba70c89b12d8d2532c7d3ff815e15a WatchSource:0}: Error finding container fd0337c995c0629633612fec6a485e716bba70c89b12d8d2532c7d3ff815e15a: Status 404 returned error can't find the container with id fd0337c995c0629633612fec6a485e716bba70c89b12d8d2532c7d3ff815e15a Oct 06 15:19:14 crc kubenswrapper[4888]: I1006 15:19:14.246347 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 06 15:19:14 crc kubenswrapper[4888]: I1006 15:19:14.779707 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9aae738b-f280-4150-b5ff-5cf0e7abf2e3","Type":"ContainerStarted","Data":"678ba802d3934ffbd8762d8a326e4f939babfad904926ee8060888d209b1aeca"} Oct 06 15:19:14 crc kubenswrapper[4888]: I1006 15:19:14.795336 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" event={"ID":"e50b443c-75ad-4c36-871e-6d486f99d547","Type":"ContainerStarted","Data":"942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862"} Oct 06 15:19:14 crc kubenswrapper[4888]: I1006 15:19:14.796181 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:14 crc kubenswrapper[4888]: I1006 15:19:14.804366 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f820ff6f-d5ea-4422-991f-a982fb9b563d","Type":"ContainerStarted","Data":"fd0337c995c0629633612fec6a485e716bba70c89b12d8d2532c7d3ff815e15a"} Oct 06 15:19:14 crc kubenswrapper[4888]: I1006 15:19:14.822551 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" podStartSLOduration=4.822531845 podStartE2EDuration="4.822531845s" podCreationTimestamp="2025-10-06 15:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:19:14.814486453 +0000 UTC m=+1094.626837201" watchObservedRunningTime="2025-10-06 
15:19:14.822531845 +0000 UTC m=+1094.634882563" Oct 06 15:19:14 crc kubenswrapper[4888]: I1006 15:19:14.952662 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d0d5bd2-7e67-4a0d-b32c-ac23389ca606" path="/var/lib/kubelet/pods/6d0d5bd2-7e67-4a0d-b32c-ac23389ca606/volumes" Oct 06 15:19:15 crc kubenswrapper[4888]: I1006 15:19:15.868379 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f","Type":"ContainerStarted","Data":"2b4dc6740a016ca6ce1c540a19fd842330a41e0c5c474f23f3081b55582b268f"} Oct 06 15:19:15 crc kubenswrapper[4888]: I1006 15:19:15.896925 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f820ff6f-d5ea-4422-991f-a982fb9b563d","Type":"ContainerStarted","Data":"2c4256d1eaca46f2e3079c0de31ca89fca6f7f9416af7ddef192effe3cf98d7b"} Oct 06 15:19:15 crc kubenswrapper[4888]: I1006 15:19:15.907838 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9aae738b-f280-4150-b5ff-5cf0e7abf2e3","Type":"ContainerStarted","Data":"f7831d37838dab22a58d3ab1834c86ec2d570c0180887d51f8db9a473c46fa52"} Oct 06 15:19:15 crc kubenswrapper[4888]: I1006 15:19:15.920356 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ca688e57-9165-4c51-8de7-366eebeb8596","Type":"ContainerStarted","Data":"7418d516c98a7c9ef910e98c3c7b07e11a1843c538ff0f4de8a5711a3d8795bb"} Oct 06 15:19:15 crc kubenswrapper[4888]: I1006 15:19:15.925394 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4bc16433-b83f-4a33-957a-3587ee2f7893","Type":"ContainerStarted","Data":"52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09"} Oct 06 15:19:16 crc kubenswrapper[4888]: I1006 15:19:16.936487 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f820ff6f-d5ea-4422-991f-a982fb9b563d","Type":"ContainerStarted","Data":"a7477045b77ce65fc0db5afa7904f04ada934dd6546fded2f645be3a0bafc0d5"} Oct 06 15:19:16 crc kubenswrapper[4888]: I1006 15:19:16.940954 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9aae738b-f280-4150-b5ff-5cf0e7abf2e3","Type":"ContainerStarted","Data":"2aaf66a8f0165401f07814c366b8c3393cdd1f72a896515b44bc6a25e89f5991"} Oct 06 15:19:16 crc kubenswrapper[4888]: I1006 15:19:16.944197 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ca688e57-9165-4c51-8de7-366eebeb8596","Type":"ContainerStarted","Data":"8259b3f7a8038d79120613f9a42b853acc39040e8a9285326d055685ac53c4ee"} Oct 06 15:19:16 crc kubenswrapper[4888]: I1006 15:19:16.949257 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4bc16433-b83f-4a33-957a-3587ee2f7893","Type":"ContainerStarted","Data":"87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846"} Oct 06 15:19:16 crc kubenswrapper[4888]: I1006 15:19:16.954843 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f","Type":"ContainerStarted","Data":"02c0ec3cbfe04095deca4ddfea209723f8faca551206dd967729bf4646a5e311"} Oct 06 15:19:16 crc kubenswrapper[4888]: I1006 15:19:16.955983 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" containerName="cinder-api-log" containerID="cri-o://2b4dc6740a016ca6ce1c540a19fd842330a41e0c5c474f23f3081b55582b268f" gracePeriod=30 Oct 06 15:19:16 crc kubenswrapper[4888]: I1006 15:19:16.956310 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 06 15:19:16 crc kubenswrapper[4888]: I1006 15:19:16.956371 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" containerName="cinder-api" containerID="cri-o://02c0ec3cbfe04095deca4ddfea209723f8faca551206dd967729bf4646a5e311" gracePeriod=30 Oct 06 15:19:17 crc kubenswrapper[4888]: I1006 15:19:17.017704 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.017684552 podStartE2EDuration="5.017684552s" podCreationTimestamp="2025-10-06 15:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:19:16.965391825 +0000 UTC m=+1096.777742543" watchObservedRunningTime="2025-10-06 15:19:17.017684552 +0000 UTC m=+1096.830035270" Oct 06 15:19:17 crc kubenswrapper[4888]: I1006 15:19:17.024762 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.764812372 podStartE2EDuration="7.024740034s" podCreationTimestamp="2025-10-06 15:19:10 +0000 UTC" firstStartedPulling="2025-10-06 15:19:12.669736133 +0000 UTC m=+1092.482086851" lastFinishedPulling="2025-10-06 15:19:13.929663795 +0000 UTC m=+1093.742014513" observedRunningTime="2025-10-06 15:19:17.002839374 +0000 UTC m=+1096.815190092" watchObservedRunningTime="2025-10-06 15:19:17.024740034 +0000 UTC m=+1096.837090752" Oct 06 15:19:17 crc kubenswrapper[4888]: I1006 15:19:17.069875 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.069855325 podStartE2EDuration="7.069855325s" podCreationTimestamp="2025-10-06 15:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:19:17.024560428 +0000 UTC m=+1096.836911166" watchObservedRunningTime="2025-10-06 15:19:17.069855325 +0000 UTC m=+1096.882206033" Oct 06 15:19:17 crc kubenswrapper[4888]: I1006 15:19:17.077290 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.077267628 podStartE2EDuration="6.077267628s" podCreationTimestamp="2025-10-06 15:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:19:17.057665611 +0000 UTC m=+1096.870016329" watchObservedRunningTime="2025-10-06 15:19:17.077267628 +0000 UTC m=+1096.889618346" Oct 06 15:19:17 crc kubenswrapper[4888]: I1006 15:19:17.986413 4888 generic.go:334] "Generic (PLEG): container finished" podID="6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" containerID="02c0ec3cbfe04095deca4ddfea209723f8faca551206dd967729bf4646a5e311" exitCode=0 Oct 06 15:19:17 crc kubenswrapper[4888]: I1006 15:19:17.987839 4888 generic.go:334] "Generic (PLEG): container finished" podID="6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" containerID="2b4dc6740a016ca6ce1c540a19fd842330a41e0c5c474f23f3081b55582b268f" exitCode=143 Oct 06 15:19:17 crc kubenswrapper[4888]: I1006 
15:19:17.990579 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f","Type":"ContainerDied","Data":"02c0ec3cbfe04095deca4ddfea209723f8faca551206dd967729bf4646a5e311"} Oct 06 15:19:17 crc kubenswrapper[4888]: I1006 15:19:17.990730 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f","Type":"ContainerDied","Data":"2b4dc6740a016ca6ce1c540a19fd842330a41e0c5c474f23f3081b55582b268f"} Oct 06 15:19:17 crc kubenswrapper[4888]: I1006 15:19:17.990846 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f","Type":"ContainerDied","Data":"878e441dc1b775673e5dcf5b9453b58157ab9effa60006ba32ef809e1d9a74c4"} Oct 06 15:19:17 crc kubenswrapper[4888]: I1006 15:19:17.990945 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878e441dc1b775673e5dcf5b9453b58157ab9effa60006ba32ef809e1d9a74c4" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.006465 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.125114 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-combined-ca-bundle\") pod \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.125193 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data-custom\") pod \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.125267 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62cnk\" (UniqueName: \"kubernetes.io/projected/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-kube-api-access-62cnk\") pod \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.125311 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-logs\") pod \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.125335 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-scripts\") pod \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.125401 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-etc-machine-id\") pod \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.125488 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data\") pod \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\" (UID: \"6c1b6eda-d914-4b6d-b686-c61bcc4bb30f\") " Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.129855 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" (UID: "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.132055 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-logs" (OuterVolumeSpecName: "logs") pod "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" (UID: "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.136977 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-scripts" (OuterVolumeSpecName: "scripts") pod "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" (UID: "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.138193 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-kube-api-access-62cnk" (OuterVolumeSpecName: "kube-api-access-62cnk") pod "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" (UID: "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f"). InnerVolumeSpecName "kube-api-access-62cnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.140864 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" (UID: "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.217193 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" (UID: "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.228129 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62cnk\" (UniqueName: \"kubernetes.io/projected/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-kube-api-access-62cnk\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.228168 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.228183 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.228196 4888 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.228212 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.228225 4888 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.244059 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data" (OuterVolumeSpecName: "config-data") pod "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" (UID: "6c1b6eda-d914-4b6d-b686-c61bcc4bb30f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.329708 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.998419 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.999624 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9aae738b-f280-4150-b5ff-5cf0e7abf2e3","Type":"ContainerStarted","Data":"6effa450121c8b5ff2d6a0f7c580736c857450ebff223f1c99025b0a4db2a607"} Oct 06 15:19:18 crc kubenswrapper[4888]: I1006 15:19:18.999687 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.031458 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.875495179 podStartE2EDuration="9.031435555s" podCreationTimestamp="2025-10-06 15:19:10 +0000 UTC" firstStartedPulling="2025-10-06 15:19:12.387601058 +0000 UTC m=+1092.199951776" lastFinishedPulling="2025-10-06 15:19:17.543541434 +0000 UTC m=+1097.355892152" observedRunningTime="2025-10-06 15:19:19.024643151 +0000 UTC m=+1098.836993869" watchObservedRunningTime="2025-10-06 15:19:19.031435555 +0000 UTC m=+1098.843786273" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.057749 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.070826 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.082014 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 06 15:19:19 crc kubenswrapper[4888]: E1006 15:19:19.082492 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" containerName="cinder-api" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.082514 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" containerName="cinder-api" Oct 06 15:19:19 crc kubenswrapper[4888]: E1006 15:19:19.082552 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" containerName="cinder-api-log" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.082558 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" containerName="cinder-api-log" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.082748 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" containerName="cinder-api-log" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.082773 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" containerName="cinder-api" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.083691 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.089480 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.089688 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.089843 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.090910 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.147825 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cfsl\" (UniqueName: \"kubernetes.io/projected/c5c59923-504c-4e97-bcde-a0b2af5adab1-kube-api-access-9cfsl\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.147883 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-config-data-custom\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.147926 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.147941 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.147975 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-config-data\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.147993 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5c59923-504c-4e97-bcde-a0b2af5adab1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.148046 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.148079 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c5c59923-504c-4e97-bcde-a0b2af5adab1-logs\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.148093 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-scripts\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.250177 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c59923-504c-4e97-bcde-a0b2af5adab1-logs\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.250235 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-scripts\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.250296 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cfsl\" (UniqueName: \"kubernetes.io/projected/c5c59923-504c-4e97-bcde-a0b2af5adab1-kube-api-access-9cfsl\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.250339 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-config-data-custom\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.250410 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.250433 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.250477 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-config-data\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.250499 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5c59923-504c-4e97-bcde-a0b2af5adab1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.250570 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.251418 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5c59923-504c-4e97-bcde-a0b2af5adab1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.252102 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c59923-504c-4e97-bcde-a0b2af5adab1-logs\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.267889 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-config-data\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.268356 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.270832 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.272335 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-scripts\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.275037 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.285512 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5c59923-504c-4e97-bcde-a0b2af5adab1-config-data-custom\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.286700 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cfsl\" (UniqueName: \"kubernetes.io/projected/c5c59923-504c-4e97-bcde-a0b2af5adab1-kube-api-access-9cfsl\") pod \"cinder-api-0\" (UID: \"c5c59923-504c-4e97-bcde-a0b2af5adab1\") " pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.404463 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.522438 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:19 crc kubenswrapper[4888]: I1006 15:19:19.984088 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 15:19:20 crc kubenswrapper[4888]: I1006 15:19:20.012259 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c5c59923-504c-4e97-bcde-a0b2af5adab1","Type":"ContainerStarted","Data":"edbe7e2e9e4a50fd6d4a440391fe1361d21cb65e2d04465a6b059368665e5df9"} Oct 06 15:19:20 crc kubenswrapper[4888]: I1006 15:19:20.969276 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c1b6eda-d914-4b6d-b686-c61bcc4bb30f" path="/var/lib/kubelet/pods/6c1b6eda-d914-4b6d-b686-c61bcc4bb30f/volumes" Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.043010 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="ceilometer-central-agent" containerID="cri-o://678ba802d3934ffbd8762d8a326e4f939babfad904926ee8060888d209b1aeca" gracePeriod=30 Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.043198 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c5c59923-504c-4e97-bcde-a0b2af5adab1","Type":"ContainerStarted","Data":"e11fb1f32e42f362829f11db8be6fe23c4392c30adefbaa697e935d5e41fde5a"} Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.043753 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="proxy-httpd" containerID="cri-o://6effa450121c8b5ff2d6a0f7c580736c857450ebff223f1c99025b0a4db2a607" gracePeriod=30 Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.043893 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="sg-core" containerID="cri-o://2aaf66a8f0165401f07814c366b8c3393cdd1f72a896515b44bc6a25e89f5991" gracePeriod=30 Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.044860 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="ceilometer-notification-agent" containerID="cri-o://f7831d37838dab22a58d3ab1834c86ec2d570c0180887d51f8db9a473c46fa52" gracePeriod=30 Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.375963 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.465194 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-4dcbt"] Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.465439 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" podUID="eb24b00b-412e-45ef-a960-5e291746d95e" containerName="dnsmasq-dns" containerID="cri-o://f43b2940985f5a0603fb7a9721abb119ac1784338eab46672c81d79516e96aec" gracePeriod=10 Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.498867 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.515006 4888 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.515175 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.602046 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 15:19:21 crc kubenswrapper[4888]: I1006 15:19:21.604443 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.020370 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.117870 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c5c59923-504c-4e97-bcde-a0b2af5adab1","Type":"ContainerStarted","Data":"d3c8661eb2532beca9b58c28c9ab33344063ccb895c1c4e3a2e78188573ce1ad"} Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.119560 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.123904 4888 generic.go:334] "Generic (PLEG): container finished" podID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerID="6effa450121c8b5ff2d6a0f7c580736c857450ebff223f1c99025b0a4db2a607" exitCode=0 Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.123946 4888 generic.go:334] "Generic (PLEG): container finished" podID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerID="2aaf66a8f0165401f07814c366b8c3393cdd1f72a896515b44bc6a25e89f5991" exitCode=2 Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.123958 4888 generic.go:334] "Generic (PLEG): container finished" podID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerID="f7831d37838dab22a58d3ab1834c86ec2d570c0180887d51f8db9a473c46fa52" exitCode=0 Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.124064 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9aae738b-f280-4150-b5ff-5cf0e7abf2e3","Type":"ContainerDied","Data":"6effa450121c8b5ff2d6a0f7c580736c857450ebff223f1c99025b0a4db2a607"} Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.124106 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9aae738b-f280-4150-b5ff-5cf0e7abf2e3","Type":"ContainerDied","Data":"2aaf66a8f0165401f07814c366b8c3393cdd1f72a896515b44bc6a25e89f5991"} Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.124120 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9aae738b-f280-4150-b5ff-5cf0e7abf2e3","Type":"ContainerDied","Data":"f7831d37838dab22a58d3ab1834c86ec2d570c0180887d51f8db9a473c46fa52"} Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.152697 4888 generic.go:334] "Generic (PLEG): container finished" podID="eb24b00b-412e-45ef-a960-5e291746d95e" containerID="f43b2940985f5a0603fb7a9721abb119ac1784338eab46672c81d79516e96aec" exitCode=0 Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.154941 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" event={"ID":"eb24b00b-412e-45ef-a960-5e291746d95e","Type":"ContainerDied","Data":"f43b2940985f5a0603fb7a9721abb119ac1784338eab46672c81d79516e96aec"} Oct 06 15:19:22 crc 
kubenswrapper[4888]: I1006 15:19:22.156396 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.157209 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.174545 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.174530107 podStartE2EDuration="3.174530107s" podCreationTimestamp="2025-10-06 15:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:19:22.16417752 +0000 UTC m=+1101.976528238" watchObservedRunningTime="2025-10-06 15:19:22.174530107 +0000 UTC m=+1101.986880825" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.265571 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.284655 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.447308 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-nb\") pod \"eb24b00b-412e-45ef-a960-5e291746d95e\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.447789 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzqtw\" (UniqueName: \"kubernetes.io/projected/eb24b00b-412e-45ef-a960-5e291746d95e-kube-api-access-wzqtw\") pod \"eb24b00b-412e-45ef-a960-5e291746d95e\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.447923 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-swift-storage-0\") pod \"eb24b00b-412e-45ef-a960-5e291746d95e\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.448375 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-config\") pod \"eb24b00b-412e-45ef-a960-5e291746d95e\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.448484 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-sb\") pod \"eb24b00b-412e-45ef-a960-5e291746d95e\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.448555 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-svc\") pod \"eb24b00b-412e-45ef-a960-5e291746d95e\" (UID: \"eb24b00b-412e-45ef-a960-5e291746d95e\") " Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.467878 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/eb24b00b-412e-45ef-a960-5e291746d95e-kube-api-access-wzqtw" (OuterVolumeSpecName: "kube-api-access-wzqtw") pod "eb24b00b-412e-45ef-a960-5e291746d95e" (UID: "eb24b00b-412e-45ef-a960-5e291746d95e"). InnerVolumeSpecName "kube-api-access-wzqtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.547470 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb24b00b-412e-45ef-a960-5e291746d95e" (UID: "eb24b00b-412e-45ef-a960-5e291746d95e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.547697 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb24b00b-412e-45ef-a960-5e291746d95e" (UID: "eb24b00b-412e-45ef-a960-5e291746d95e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.552617 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzqtw\" (UniqueName: \"kubernetes.io/projected/eb24b00b-412e-45ef-a960-5e291746d95e-kube-api-access-wzqtw\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.552651 4888 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.552661 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.553759 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-config" (OuterVolumeSpecName: "config") pod "eb24b00b-412e-45ef-a960-5e291746d95e" (UID: "eb24b00b-412e-45ef-a960-5e291746d95e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.579298 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb24b00b-412e-45ef-a960-5e291746d95e" (UID: "eb24b00b-412e-45ef-a960-5e291746d95e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.587087 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb24b00b-412e-45ef-a960-5e291746d95e" (UID: "eb24b00b-412e-45ef-a960-5e291746d95e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.655119 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.655160 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:22 crc kubenswrapper[4888]: I1006 15:19:22.655172 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb24b00b-412e-45ef-a960-5e291746d95e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.094512 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.094828 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.295234 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.296090 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" event={"ID":"eb24b00b-412e-45ef-a960-5e291746d95e","Type":"ContainerDied","Data":"88a7e4235c7bb38235b997e5f5fbb0cf595433b7e07bdce918e6cc0430f74b95"} Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.296127 4888 scope.go:117] "RemoveContainer" containerID="f43b2940985f5a0603fb7a9721abb119ac1784338eab46672c81d79516e96aec" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.350880 4888 generic.go:334] "Generic (PLEG): container finished" podID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerID="678ba802d3934ffbd8762d8a326e4f939babfad904926ee8060888d209b1aeca" exitCode=0 Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.351115 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4bc16433-b83f-4a33-957a-3587ee2f7893" containerName="cinder-scheduler" containerID="cri-o://52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09" gracePeriod=30 Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.351202 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9aae738b-f280-4150-b5ff-5cf0e7abf2e3","Type":"ContainerDied","Data":"678ba802d3934ffbd8762d8a326e4f939babfad904926ee8060888d209b1aeca"} Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.352288 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4bc16433-b83f-4a33-957a-3587ee2f7893" containerName="probe" containerID="cri-o://87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846" gracePeriod=30 Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.433036 4888 scope.go:117] "RemoveContainer" containerID="d20de1b2420c7c7d60ead648c5b48300b2de981eeb1dc339bc967d63804db48d" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.434869 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.458292 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-4dcbt"] Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.459727 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.460630 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.478393 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-4dcbt"] Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.482467 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-config-data\") pod \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.482524 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-run-httpd\") pod \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.482580 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-combined-ca-bundle\") pod \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.482664 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-scripts\") pod \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.482709 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-log-httpd\") pod \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.482741 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-sg-core-conf-yaml\") pod \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.482922 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zt74\" (UniqueName: \"kubernetes.io/projected/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-kube-api-access-8zt74\") pod \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\" (UID: \"9aae738b-f280-4150-b5ff-5cf0e7abf2e3\") " Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.485938 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9aae738b-f280-4150-b5ff-5cf0e7abf2e3" (UID: "9aae738b-f280-4150-b5ff-5cf0e7abf2e3"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.490434 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9aae738b-f280-4150-b5ff-5cf0e7abf2e3" (UID: "9aae738b-f280-4150-b5ff-5cf0e7abf2e3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.504826 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-scripts" (OuterVolumeSpecName: "scripts") pod "9aae738b-f280-4150-b5ff-5cf0e7abf2e3" (UID: "9aae738b-f280-4150-b5ff-5cf0e7abf2e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.518001 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-kube-api-access-8zt74" (OuterVolumeSpecName: "kube-api-access-8zt74") pod "9aae738b-f280-4150-b5ff-5cf0e7abf2e3" (UID: "9aae738b-f280-4150-b5ff-5cf0e7abf2e3"). InnerVolumeSpecName "kube-api-access-8zt74". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.585671 4888 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.585710 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.585722 4888 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.585732 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zt74\" (UniqueName: \"kubernetes.io/projected/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-kube-api-access-8zt74\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.598964 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9aae738b-f280-4150-b5ff-5cf0e7abf2e3" (UID: "9aae738b-f280-4150-b5ff-5cf0e7abf2e3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.599122 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.659916 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-config-data" (OuterVolumeSpecName: "config-data") pod "9aae738b-f280-4150-b5ff-5cf0e7abf2e3" (UID: "9aae738b-f280-4150-b5ff-5cf0e7abf2e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.664891 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9aae738b-f280-4150-b5ff-5cf0e7abf2e3" (UID: "9aae738b-f280-4150-b5ff-5cf0e7abf2e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.687060 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.687104 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:23 crc kubenswrapper[4888]: I1006 15:19:23.687120 4888 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9aae738b-f280-4150-b5ff-5cf0e7abf2e3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.362855 4888 generic.go:334] "Generic (PLEG): container finished" podID="4bc16433-b83f-4a33-957a-3587ee2f7893" containerID="87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846" exitCode=0 Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.363182 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4bc16433-b83f-4a33-957a-3587ee2f7893","Type":"ContainerDied","Data":"87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846"} Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.368931 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9aae738b-f280-4150-b5ff-5cf0e7abf2e3","Type":"ContainerDied","Data":"a99b9ff2076651f8eb2922ced58b3472106794966ab38fc90f8096990eaab535"} Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.369005 4888 scope.go:117] "RemoveContainer" containerID="6effa450121c8b5ff2d6a0f7c580736c857450ebff223f1c99025b0a4db2a607" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.369303 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.370510 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.370535 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.371250 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.474295 4888 scope.go:117] "RemoveContainer" containerID="2aaf66a8f0165401f07814c366b8c3393cdd1f72a896515b44bc6a25e89f5991" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.526202 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.538637 4888 scope.go:117] "RemoveContainer" containerID="f7831d37838dab22a58d3ab1834c86ec2d570c0180887d51f8db9a473c46fa52" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.540721 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.570586 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:24 crc kubenswrapper[4888]: E1006 15:19:24.571313 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb24b00b-412e-45ef-a960-5e291746d95e" containerName="init" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.571444 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb24b00b-412e-45ef-a960-5e291746d95e" containerName="init" Oct 06 15:19:24 crc kubenswrapper[4888]: E1006 15:19:24.571535 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb24b00b-412e-45ef-a960-5e291746d95e" containerName="dnsmasq-dns" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.571604 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb24b00b-412e-45ef-a960-5e291746d95e" containerName="dnsmasq-dns" Oct 06 15:19:24 crc kubenswrapper[4888]: E1006 15:19:24.571682 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="ceilometer-central-agent" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.571749 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="ceilometer-central-agent" Oct 06 15:19:24 crc kubenswrapper[4888]: E1006 15:19:24.571835 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="proxy-httpd" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.572044 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="proxy-httpd" Oct 06 15:19:24 crc kubenswrapper[4888]: E1006 15:19:24.572134 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="ceilometer-notification-agent" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.572203 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="ceilometer-notification-agent" Oct 06 15:19:24 crc kubenswrapper[4888]: E1006 15:19:24.572276 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="sg-core" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 
15:19:24.572327 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="sg-core" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.572570 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="ceilometer-central-agent" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.572631 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="ceilometer-notification-agent" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.572695 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="sg-core" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.572770 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb24b00b-412e-45ef-a960-5e291746d95e" containerName="dnsmasq-dns" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.572865 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" containerName="proxy-httpd" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.574681 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.574943 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.579624 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.590973 4888 scope.go:117] "RemoveContainer" containerID="678ba802d3934ffbd8762d8a326e4f939babfad904926ee8060888d209b1aeca" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.601607 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.719409 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.719746 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-scripts\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.719854 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnjbk\" (UniqueName: \"kubernetes.io/projected/56e2006b-e85c-4165-84c8-e85f90c38907-kube-api-access-dnjbk\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.719936 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-log-httpd\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.720073 4888 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-run-httpd\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.720121 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-config-data\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.720143 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.821868 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-run-httpd\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.822141 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-config-data\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.822296 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.822518 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.822667 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-scripts\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.822853 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnjbk\" (UniqueName: \"kubernetes.io/projected/56e2006b-e85c-4165-84c8-e85f90c38907-kube-api-access-dnjbk\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.822985 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-log-httpd\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc 
kubenswrapper[4888]: I1006 15:19:24.822524 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-run-httpd\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.823654 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-log-httpd\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.830343 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-scripts\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.831696 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.835985 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.836269 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-config-data\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.870520 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnjbk\" (UniqueName: \"kubernetes.io/projected/56e2006b-e85c-4165-84c8-e85f90c38907-kube-api-access-dnjbk\") pod \"ceilometer-0\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.934248 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aae738b-f280-4150-b5ff-5cf0e7abf2e3" path="/var/lib/kubelet/pods/9aae738b-f280-4150-b5ff-5cf0e7abf2e3/volumes" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.935271 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:19:24 crc kubenswrapper[4888]: I1006 15:19:24.935474 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb24b00b-412e-45ef-a960-5e291746d95e" path="/var/lib/kubelet/pods/eb24b00b-412e-45ef-a960-5e291746d95e/volumes" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.370030 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.382389 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"655523f3-6f3b-4675-8b5a-4c0451a185ca","Type":"ContainerStarted","Data":"48220af85c80f6369a4ed3fa992fe621506584f1ae988fa4f5d8127b77cdaef9"} Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.388300 4888 generic.go:334] "Generic (PLEG): container finished" podID="4bc16433-b83f-4a33-957a-3587ee2f7893" containerID="52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09" exitCode=0 Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.388424 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.388450 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.388498 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4bc16433-b83f-4a33-957a-3587ee2f7893","Type":"ContainerDied","Data":"52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09"} Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.388534 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4bc16433-b83f-4a33-957a-3587ee2f7893","Type":"ContainerDied","Data":"d472a8e921a297f33304da36f6c3503461dd380e13790083ff9b59e96406bbd9"} Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.388558 4888 scope.go:117] "RemoveContainer" containerID="87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.429923 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.301281291 podStartE2EDuration="40.429905514s" podCreationTimestamp="2025-10-06 15:18:45 +0000 UTC" firstStartedPulling="2025-10-06 15:18:47.212295544 +0000 UTC m=+1067.024646262" lastFinishedPulling="2025-10-06 15:19:24.340919767 +0000 UTC m=+1104.153270485" observedRunningTime="2025-10-06 15:19:25.422115809 +0000 UTC m=+1105.234466547" watchObservedRunningTime="2025-10-06 15:19:25.429905514 +0000 UTC m=+1105.242256232" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.440676 4888 scope.go:117] "RemoveContainer" containerID="52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.488292 4888 scope.go:117] "RemoveContainer" containerID="87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846" Oct 06 15:19:25 crc kubenswrapper[4888]: E1006 15:19:25.488788 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846\": container with ID starting with 87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846 not found: ID does not exist" containerID="87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.490098 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846"} err="failed to get container status \"87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846\": rpc error: code = NotFound desc = could not find 
container \"87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846\": container with ID starting with 87260acc1816ec7bd0f80406eee2444f714b09a77ccdc609a08ee52ca41b2846 not found: ID does not exist" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.490135 4888 scope.go:117] "RemoveContainer" containerID="52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09" Oct 06 15:19:25 crc kubenswrapper[4888]: E1006 15:19:25.490562 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09\": container with ID starting with 52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09 not found: ID does not exist" containerID="52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.490588 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09"} err="failed to get container status \"52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09\": rpc error: code = NotFound desc = could not find container \"52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09\": container with ID starting with 52e29fa2133c2f300bb77d6197883a9da1034fb01d64c99f489f8b625542fc09 not found: ID does not exist" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.536619 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-combined-ca-bundle\") pod \"4bc16433-b83f-4a33-957a-3587ee2f7893\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.536718 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5z78\" (UniqueName: \"kubernetes.io/projected/4bc16433-b83f-4a33-957a-3587ee2f7893-kube-api-access-t5z78\") pod \"4bc16433-b83f-4a33-957a-3587ee2f7893\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.536829 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data-custom\") pod \"4bc16433-b83f-4a33-957a-3587ee2f7893\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.536866 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data\") pod \"4bc16433-b83f-4a33-957a-3587ee2f7893\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.536928 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-scripts\") pod \"4bc16433-b83f-4a33-957a-3587ee2f7893\" (UID: \"4bc16433-b83f-4a33-957a-3587ee2f7893\") " Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.536967 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bc16433-b83f-4a33-957a-3587ee2f7893-etc-machine-id\") pod \"4bc16433-b83f-4a33-957a-3587ee2f7893\" (UID: 
\"4bc16433-b83f-4a33-957a-3587ee2f7893\") " Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.538964 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bc16433-b83f-4a33-957a-3587ee2f7893-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4bc16433-b83f-4a33-957a-3587ee2f7893" (UID: "4bc16433-b83f-4a33-957a-3587ee2f7893"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.550953 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4bc16433-b83f-4a33-957a-3587ee2f7893" (UID: "4bc16433-b83f-4a33-957a-3587ee2f7893"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.551987 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-scripts" (OuterVolumeSpecName: "scripts") pod "4bc16433-b83f-4a33-957a-3587ee2f7893" (UID: "4bc16433-b83f-4a33-957a-3587ee2f7893"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.554096 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc16433-b83f-4a33-957a-3587ee2f7893-kube-api-access-t5z78" (OuterVolumeSpecName: "kube-api-access-t5z78") pod "4bc16433-b83f-4a33-957a-3587ee2f7893" (UID: "4bc16433-b83f-4a33-957a-3587ee2f7893"). InnerVolumeSpecName "kube-api-access-t5z78". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.630925 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bc16433-b83f-4a33-957a-3587ee2f7893" (UID: "4bc16433-b83f-4a33-957a-3587ee2f7893"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.639579 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.639624 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5z78\" (UniqueName: \"kubernetes.io/projected/4bc16433-b83f-4a33-957a-3587ee2f7893-kube-api-access-t5z78\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.639696 4888 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.639709 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.639721 4888 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4bc16433-b83f-4a33-957a-3587ee2f7893-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.676409 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.708944 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data" (OuterVolumeSpecName: "config-data") pod "4bc16433-b83f-4a33-957a-3587ee2f7893" (UID: "4bc16433-b83f-4a33-957a-3587ee2f7893"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:25 crc kubenswrapper[4888]: I1006 15:19:25.741821 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bc16433-b83f-4a33-957a-3587ee2f7893-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.038314 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.048245 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.079237 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 15:19:26 crc kubenswrapper[4888]: E1006 15:19:26.079748 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc16433-b83f-4a33-957a-3587ee2f7893" containerName="cinder-scheduler" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.079773 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc16433-b83f-4a33-957a-3587ee2f7893" containerName="cinder-scheduler" Oct 06 15:19:26 crc kubenswrapper[4888]: E1006 15:19:26.079812 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc16433-b83f-4a33-957a-3587ee2f7893" containerName="probe" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.079824 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc16433-b83f-4a33-957a-3587ee2f7893" containerName="probe" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.080020 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc16433-b83f-4a33-957a-3587ee2f7893" containerName="probe" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.080042 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc16433-b83f-4a33-957a-3587ee2f7893" containerName="cinder-scheduler" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.085002 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.095652 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.106425 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.151533 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-config-data\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.151597 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.151629 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzkts\" (UniqueName: \"kubernetes.io/projected/7176912c-2b8c-46ac-b8e6-8067439a2720-kube-api-access-hzkts\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.151669 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.151745 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7176912c-2b8c-46ac-b8e6-8067439a2720-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.151768 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-scripts\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.253897 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-config-data\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.254242 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.254282 4888 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hzkts\" (UniqueName: \"kubernetes.io/projected/7176912c-2b8c-46ac-b8e6-8067439a2720-kube-api-access-hzkts\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.254333 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.254428 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7176912c-2b8c-46ac-b8e6-8067439a2720-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.254474 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-scripts\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.255018 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7176912c-2b8c-46ac-b8e6-8067439a2720-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.261491 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-config-data\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.264346 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.269690 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-scripts\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.278374 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7176912c-2b8c-46ac-b8e6-8067439a2720-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.289574 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzkts\" (UniqueName: \"kubernetes.io/projected/7176912c-2b8c-46ac-b8e6-8067439a2720-kube-api-access-hzkts\") pod \"cinder-scheduler-0\" (UID: \"7176912c-2b8c-46ac-b8e6-8067439a2720\") " pod="openstack/cinder-scheduler-0" Oct 06 
15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.314892 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.406968 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56e2006b-e85c-4165-84c8-e85f90c38907","Type":"ContainerStarted","Data":"0b37b499e729af1c1fa8ee38488752451677bd97fef6054f4aaf8f3379e1229a"} Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.412695 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.412722 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.430980 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.431087 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.932634 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc16433-b83f-4a33-957a-3587ee2f7893" path="/var/lib/kubelet/pods/4bc16433-b83f-4a33-957a-3587ee2f7893/volumes" Oct 06 15:19:26 crc kubenswrapper[4888]: I1006 15:19:26.937385 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 15:19:27 crc kubenswrapper[4888]: I1006 15:19:27.120517 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75c8ddd69c-4dcbt" podUID="eb24b00b-412e-45ef-a960-5e291746d95e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: i/o timeout" Oct 06 15:19:27 crc kubenswrapper[4888]: I1006 15:19:27.429897 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56e2006b-e85c-4165-84c8-e85f90c38907","Type":"ContainerStarted","Data":"e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610"} Oct 06 15:19:27 crc kubenswrapper[4888]: I1006 15:19:27.429949 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56e2006b-e85c-4165-84c8-e85f90c38907","Type":"ContainerStarted","Data":"a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4"} Oct 06 15:19:27 crc kubenswrapper[4888]: I1006 15:19:27.431317 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7176912c-2b8c-46ac-b8e6-8067439a2720","Type":"ContainerStarted","Data":"b922334cefd1d434712b9e0fe020419b161ce8f12071fca39df67e45987bee40"} Oct 06 15:19:27 crc kubenswrapper[4888]: I1006 15:19:27.495333 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 15:19:28 crc kubenswrapper[4888]: I1006 15:19:28.348530 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:28 crc kubenswrapper[4888]: I1006 15:19:28.349115 4888 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 15:19:28 crc kubenswrapper[4888]: I1006 15:19:28.447416 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7176912c-2b8c-46ac-b8e6-8067439a2720","Type":"ContainerStarted","Data":"795a810e3250a0a3cde3dd8600a61461651812a25caa3b16a665fc652819e622"} Oct 06 15:19:28 crc kubenswrapper[4888]: I1006 
15:19:28.451514 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56e2006b-e85c-4165-84c8-e85f90c38907","Type":"ContainerStarted","Data":"16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68"} Oct 06 15:19:29 crc kubenswrapper[4888]: I1006 15:19:29.120514 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 15:19:29 crc kubenswrapper[4888]: I1006 15:19:29.460698 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7176912c-2b8c-46ac-b8e6-8067439a2720","Type":"ContainerStarted","Data":"bb26d4f990c04db6d9c9a42f2ae22caa9c9673a2ee42763c95a1e8d8204ceda8"} Oct 06 15:19:29 crc kubenswrapper[4888]: I1006 15:19:29.487614 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.487597281 podStartE2EDuration="3.487597281s" podCreationTimestamp="2025-10-06 15:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:19:29.48027986 +0000 UTC m=+1109.292630598" watchObservedRunningTime="2025-10-06 15:19:29.487597281 +0000 UTC m=+1109.299947999" Oct 06 15:19:30 crc kubenswrapper[4888]: I1006 15:19:30.551195 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56e2006b-e85c-4165-84c8-e85f90c38907","Type":"ContainerStarted","Data":"2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381"} Oct 06 15:19:30 crc kubenswrapper[4888]: I1006 15:19:30.551532 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 15:19:30 crc kubenswrapper[4888]: I1006 15:19:30.613008 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.810719452 podStartE2EDuration="6.612991745s" podCreationTimestamp="2025-10-06 15:19:24 +0000 UTC" firstStartedPulling="2025-10-06 15:19:25.683564823 +0000 UTC m=+1105.495915541" lastFinishedPulling="2025-10-06 15:19:29.485837116 +0000 UTC m=+1109.298187834" observedRunningTime="2025-10-06 15:19:30.6036399 +0000 UTC m=+1110.415990618" watchObservedRunningTime="2025-10-06 15:19:30.612991745 +0000 UTC m=+1110.425342463" Oct 06 15:19:30 crc kubenswrapper[4888]: I1006 15:19:30.900148 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:31 crc kubenswrapper[4888]: I1006 15:19:31.315950 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 06 15:19:32 crc kubenswrapper[4888]: I1006 15:19:32.563144 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:19:32 crc kubenswrapper[4888]: I1006 15:19:32.563206 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:19:32 crc kubenswrapper[4888]: I1006 15:19:32.569535 4888 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="sg-core" containerID="cri-o://16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68" gracePeriod=30 Oct 06 15:19:32 crc kubenswrapper[4888]: I1006 15:19:32.569592 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="proxy-httpd" containerID="cri-o://2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381" gracePeriod=30 Oct 06 15:19:32 crc kubenswrapper[4888]: I1006 15:19:32.569684 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="ceilometer-central-agent" containerID="cri-o://a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4" gracePeriod=30 Oct 06 15:19:32 crc kubenswrapper[4888]: I1006 15:19:32.569542 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="ceilometer-notification-agent" containerID="cri-o://e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610" gracePeriod=30 Oct 06 15:19:32 crc kubenswrapper[4888]: I1006 15:19:32.635393 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 06 15:19:33 crc kubenswrapper[4888]: I1006 15:19:33.580831 4888 generic.go:334] "Generic (PLEG): container finished" podID="56e2006b-e85c-4165-84c8-e85f90c38907" containerID="2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381" exitCode=0 Oct 06 15:19:33 crc kubenswrapper[4888]: I1006 15:19:33.581113 4888 generic.go:334] "Generic (PLEG): container finished" podID="56e2006b-e85c-4165-84c8-e85f90c38907" containerID="16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68" exitCode=2 Oct 06 15:19:33 crc kubenswrapper[4888]: I1006 15:19:33.581125 4888 generic.go:334] "Generic (PLEG): container finished" podID="56e2006b-e85c-4165-84c8-e85f90c38907" containerID="e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610" exitCode=0 Oct 06 15:19:33 crc kubenswrapper[4888]: I1006 15:19:33.580874 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56e2006b-e85c-4165-84c8-e85f90c38907","Type":"ContainerDied","Data":"2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381"} Oct 06 15:19:33 crc kubenswrapper[4888]: I1006 15:19:33.581157 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56e2006b-e85c-4165-84c8-e85f90c38907","Type":"ContainerDied","Data":"16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68"} Oct 06 15:19:33 crc kubenswrapper[4888]: I1006 15:19:33.581172 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56e2006b-e85c-4165-84c8-e85f90c38907","Type":"ContainerDied","Data":"e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610"} Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.426744 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pmxgb"] Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.429031 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pmxgb" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.466910 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pmxgb"] Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.500328 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zczh\" (UniqueName: \"kubernetes.io/projected/94e623f3-1187-456b-97e3-d18f0f278c19-kube-api-access-8zczh\") pod \"nova-api-db-create-pmxgb\" (UID: \"94e623f3-1187-456b-97e3-d18f0f278c19\") " pod="openstack/nova-api-db-create-pmxgb" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.601634 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zczh\" (UniqueName: \"kubernetes.io/projected/94e623f3-1187-456b-97e3-d18f0f278c19-kube-api-access-8zczh\") pod \"nova-api-db-create-pmxgb\" (UID: \"94e623f3-1187-456b-97e3-d18f0f278c19\") " pod="openstack/nova-api-db-create-pmxgb" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.643380 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-hzrxr"] Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.644539 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hzrxr" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.664029 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zczh\" (UniqueName: \"kubernetes.io/projected/94e623f3-1187-456b-97e3-d18f0f278c19-kube-api-access-8zczh\") pod \"nova-api-db-create-pmxgb\" (UID: \"94e623f3-1187-456b-97e3-d18f0f278c19\") " pod="openstack/nova-api-db-create-pmxgb" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.706025 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp5lk\" (UniqueName: \"kubernetes.io/projected/a7ea57a3-51e1-49df-a9cb-531568e7867a-kube-api-access-xp5lk\") pod \"nova-cell0-db-create-hzrxr\" (UID: \"a7ea57a3-51e1-49df-a9cb-531568e7867a\") " pod="openstack/nova-cell0-db-create-hzrxr" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.720885 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hzrxr"] Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.750336 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pmxgb" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.761099 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-qd58v"] Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.762571 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qd58v" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.809774 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp5lk\" (UniqueName: \"kubernetes.io/projected/a7ea57a3-51e1-49df-a9cb-531568e7867a-kube-api-access-xp5lk\") pod \"nova-cell0-db-create-hzrxr\" (UID: \"a7ea57a3-51e1-49df-a9cb-531568e7867a\") " pod="openstack/nova-cell0-db-create-hzrxr" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.809878 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tpjb\" (UniqueName: \"kubernetes.io/projected/d229f482-25b6-4513-a024-45b8556fe4a4-kube-api-access-5tpjb\") pod \"nova-cell1-db-create-qd58v\" (UID: \"d229f482-25b6-4513-a024-45b8556fe4a4\") " pod="openstack/nova-cell1-db-create-qd58v" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.835950 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qd58v"] Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.853485 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp5lk\" (UniqueName: \"kubernetes.io/projected/a7ea57a3-51e1-49df-a9cb-531568e7867a-kube-api-access-xp5lk\") pod \"nova-cell0-db-create-hzrxr\" (UID: \"a7ea57a3-51e1-49df-a9cb-531568e7867a\") " pod="openstack/nova-cell0-db-create-hzrxr" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.911785 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tpjb\" (UniqueName: \"kubernetes.io/projected/d229f482-25b6-4513-a024-45b8556fe4a4-kube-api-access-5tpjb\") pod \"nova-cell1-db-create-qd58v\" (UID: \"d229f482-25b6-4513-a024-45b8556fe4a4\") " pod="openstack/nova-cell1-db-create-qd58v" Oct 06 15:19:36 crc kubenswrapper[4888]: I1006 15:19:36.968594 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tpjb\" (UniqueName: \"kubernetes.io/projected/d229f482-25b6-4513-a024-45b8556fe4a4-kube-api-access-5tpjb\") pod \"nova-cell1-db-create-qd58v\" (UID: \"d229f482-25b6-4513-a024-45b8556fe4a4\") " pod="openstack/nova-cell1-db-create-qd58v" Oct 06 15:19:37 crc kubenswrapper[4888]: I1006 15:19:37.055935 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hzrxr" Oct 06 15:19:37 crc kubenswrapper[4888]: I1006 15:19:37.219395 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qd58v" Oct 06 15:19:37 crc kubenswrapper[4888]: I1006 15:19:37.310148 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 06 15:19:37 crc kubenswrapper[4888]: I1006 15:19:37.525484 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pmxgb"] Oct 06 15:19:37 crc kubenswrapper[4888]: I1006 15:19:37.533605 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hzrxr"] Oct 06 15:19:37 crc kubenswrapper[4888]: W1006 15:19:37.536887 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7ea57a3_51e1_49df_a9cb_531568e7867a.slice/crio-7b2acc1cb5ae786a5dc2937d4e6a6abb966ac65de5f8897fdf877e03648dd420 WatchSource:0}: Error finding container 7b2acc1cb5ae786a5dc2937d4e6a6abb966ac65de5f8897fdf877e03648dd420: Status 404 returned error can't find the container with id 7b2acc1cb5ae786a5dc2937d4e6a6abb966ac65de5f8897fdf877e03648dd420 Oct 06 15:19:37 crc kubenswrapper[4888]: I1006 15:19:37.632860 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hzrxr" event={"ID":"a7ea57a3-51e1-49df-a9cb-531568e7867a","Type":"ContainerStarted","Data":"7b2acc1cb5ae786a5dc2937d4e6a6abb966ac65de5f8897fdf877e03648dd420"} Oct 06 15:19:37 crc kubenswrapper[4888]: I1006 15:19:37.634249 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pmxgb" event={"ID":"94e623f3-1187-456b-97e3-d18f0f278c19","Type":"ContainerStarted","Data":"6bad4b236e1040b23e2780fe9e1f70cd611044e61d7721a883e13f9b2690c235"} Oct 06 15:19:37 crc kubenswrapper[4888]: I1006 15:19:37.846486 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qd58v"] Oct 06 15:19:38 crc kubenswrapper[4888]: I1006 15:19:38.645037 4888 generic.go:334] "Generic (PLEG): container finished" podID="94e623f3-1187-456b-97e3-d18f0f278c19" containerID="4281cc046ec695e7543ebb463a22ff1ccdb80f6108b67e6d442bf93c46a88618" exitCode=0 Oct 06 15:19:38 crc kubenswrapper[4888]: I1006 15:19:38.645162 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pmxgb" event={"ID":"94e623f3-1187-456b-97e3-d18f0f278c19","Type":"ContainerDied","Data":"4281cc046ec695e7543ebb463a22ff1ccdb80f6108b67e6d442bf93c46a88618"} Oct 06 15:19:38 crc kubenswrapper[4888]: I1006 15:19:38.647693 4888 generic.go:334] "Generic (PLEG): container finished" podID="d229f482-25b6-4513-a024-45b8556fe4a4" containerID="4fda17590c9dd1bd00f0d662562cab9f76d3e474f1dc92ac032e82d74abfc5ec" exitCode=0 Oct 06 15:19:38 crc kubenswrapper[4888]: I1006 15:19:38.647738 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qd58v" event={"ID":"d229f482-25b6-4513-a024-45b8556fe4a4","Type":"ContainerDied","Data":"4fda17590c9dd1bd00f0d662562cab9f76d3e474f1dc92ac032e82d74abfc5ec"} Oct 06 15:19:38 crc kubenswrapper[4888]: I1006 15:19:38.647839 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qd58v" event={"ID":"d229f482-25b6-4513-a024-45b8556fe4a4","Type":"ContainerStarted","Data":"7cfb9a3fc4c174d614a7be9f8a7bf879f384c4d594e64bcfb5b574d7a24df0b3"} Oct 06 15:19:38 crc kubenswrapper[4888]: I1006 15:19:38.649533 4888 generic.go:334] "Generic (PLEG): container finished" podID="a7ea57a3-51e1-49df-a9cb-531568e7867a" 
containerID="d484eb88e93f948470fe3fe31abb4a630f536bc69d8957f395387cc2f0d2ab10" exitCode=0 Oct 06 15:19:38 crc kubenswrapper[4888]: I1006 15:19:38.649580 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hzrxr" event={"ID":"a7ea57a3-51e1-49df-a9cb-531568e7867a","Type":"ContainerDied","Data":"d484eb88e93f948470fe3fe31abb4a630f536bc69d8957f395387cc2f0d2ab10"} Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.304241 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qd58v" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.395353 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tpjb\" (UniqueName: \"kubernetes.io/projected/d229f482-25b6-4513-a024-45b8556fe4a4-kube-api-access-5tpjb\") pod \"d229f482-25b6-4513-a024-45b8556fe4a4\" (UID: \"d229f482-25b6-4513-a024-45b8556fe4a4\") " Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.405625 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d229f482-25b6-4513-a024-45b8556fe4a4-kube-api-access-5tpjb" (OuterVolumeSpecName: "kube-api-access-5tpjb") pod "d229f482-25b6-4513-a024-45b8556fe4a4" (UID: "d229f482-25b6-4513-a024-45b8556fe4a4"). InnerVolumeSpecName "kube-api-access-5tpjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.497816 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tpjb\" (UniqueName: \"kubernetes.io/projected/d229f482-25b6-4513-a024-45b8556fe4a4-kube-api-access-5tpjb\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.509468 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pmxgb" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.518478 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hzrxr" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.599114 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp5lk\" (UniqueName: \"kubernetes.io/projected/a7ea57a3-51e1-49df-a9cb-531568e7867a-kube-api-access-xp5lk\") pod \"a7ea57a3-51e1-49df-a9cb-531568e7867a\" (UID: \"a7ea57a3-51e1-49df-a9cb-531568e7867a\") " Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.599402 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zczh\" (UniqueName: \"kubernetes.io/projected/94e623f3-1187-456b-97e3-d18f0f278c19-kube-api-access-8zczh\") pod \"94e623f3-1187-456b-97e3-d18f0f278c19\" (UID: \"94e623f3-1187-456b-97e3-d18f0f278c19\") " Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.603394 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e623f3-1187-456b-97e3-d18f0f278c19-kube-api-access-8zczh" (OuterVolumeSpecName: "kube-api-access-8zczh") pod "94e623f3-1187-456b-97e3-d18f0f278c19" (UID: "94e623f3-1187-456b-97e3-d18f0f278c19"). InnerVolumeSpecName "kube-api-access-8zczh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.605557 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ea57a3-51e1-49df-a9cb-531568e7867a-kube-api-access-xp5lk" (OuterVolumeSpecName: "kube-api-access-xp5lk") pod "a7ea57a3-51e1-49df-a9cb-531568e7867a" (UID: "a7ea57a3-51e1-49df-a9cb-531568e7867a"). InnerVolumeSpecName "kube-api-access-xp5lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.672308 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hzrxr" event={"ID":"a7ea57a3-51e1-49df-a9cb-531568e7867a","Type":"ContainerDied","Data":"7b2acc1cb5ae786a5dc2937d4e6a6abb966ac65de5f8897fdf877e03648dd420"} Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.672350 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b2acc1cb5ae786a5dc2937d4e6a6abb966ac65de5f8897fdf877e03648dd420" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.672346 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hzrxr" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.674353 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pmxgb" event={"ID":"94e623f3-1187-456b-97e3-d18f0f278c19","Type":"ContainerDied","Data":"6bad4b236e1040b23e2780fe9e1f70cd611044e61d7721a883e13f9b2690c235"} Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.674386 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bad4b236e1040b23e2780fe9e1f70cd611044e61d7721a883e13f9b2690c235" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.674444 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pmxgb" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.685115 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qd58v" event={"ID":"d229f482-25b6-4513-a024-45b8556fe4a4","Type":"ContainerDied","Data":"7cfb9a3fc4c174d614a7be9f8a7bf879f384c4d594e64bcfb5b574d7a24df0b3"} Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.685166 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cfb9a3fc4c174d614a7be9f8a7bf879f384c4d594e64bcfb5b574d7a24df0b3" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.685231 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qd58v" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.701858 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zczh\" (UniqueName: \"kubernetes.io/projected/94e623f3-1187-456b-97e3-d18f0f278c19-kube-api-access-8zczh\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.701882 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp5lk\" (UniqueName: \"kubernetes.io/projected/a7ea57a3-51e1-49df-a9cb-531568e7867a-kube-api-access-xp5lk\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:40 crc kubenswrapper[4888]: I1006 15:19:40.990462 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.007286 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-config-data\") pod \"56e2006b-e85c-4165-84c8-e85f90c38907\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.007441 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-scripts\") pod \"56e2006b-e85c-4165-84c8-e85f90c38907\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.007525 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-combined-ca-bundle\") pod \"56e2006b-e85c-4165-84c8-e85f90c38907\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.007623 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnjbk\" (UniqueName: \"kubernetes.io/projected/56e2006b-e85c-4165-84c8-e85f90c38907-kube-api-access-dnjbk\") pod \"56e2006b-e85c-4165-84c8-e85f90c38907\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.007646 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-log-httpd\") pod \"56e2006b-e85c-4165-84c8-e85f90c38907\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.007669 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-run-httpd\") pod \"56e2006b-e85c-4165-84c8-e85f90c38907\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.007717 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-sg-core-conf-yaml\") pod \"56e2006b-e85c-4165-84c8-e85f90c38907\" (UID: \"56e2006b-e85c-4165-84c8-e85f90c38907\") " Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.011693 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "56e2006b-e85c-4165-84c8-e85f90c38907" (UID: "56e2006b-e85c-4165-84c8-e85f90c38907"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.012978 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "56e2006b-e85c-4165-84c8-e85f90c38907" (UID: "56e2006b-e85c-4165-84c8-e85f90c38907"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.021541 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-scripts" (OuterVolumeSpecName: "scripts") pod "56e2006b-e85c-4165-84c8-e85f90c38907" (UID: "56e2006b-e85c-4165-84c8-e85f90c38907"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.026345 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e2006b-e85c-4165-84c8-e85f90c38907-kube-api-access-dnjbk" (OuterVolumeSpecName: "kube-api-access-dnjbk") pod "56e2006b-e85c-4165-84c8-e85f90c38907" (UID: "56e2006b-e85c-4165-84c8-e85f90c38907"). InnerVolumeSpecName "kube-api-access-dnjbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.040464 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "56e2006b-e85c-4165-84c8-e85f90c38907" (UID: "56e2006b-e85c-4165-84c8-e85f90c38907"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.110697 4888 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.111968 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.112046 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnjbk\" (UniqueName: \"kubernetes.io/projected/56e2006b-e85c-4165-84c8-e85f90c38907-kube-api-access-dnjbk\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.112105 4888 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.112188 4888 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56e2006b-e85c-4165-84c8-e85f90c38907-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.117448 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56e2006b-e85c-4165-84c8-e85f90c38907" (UID: "56e2006b-e85c-4165-84c8-e85f90c38907"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.153614 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-config-data" (OuterVolumeSpecName: "config-data") pod "56e2006b-e85c-4165-84c8-e85f90c38907" (UID: "56e2006b-e85c-4165-84c8-e85f90c38907"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.214658 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.214696 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56e2006b-e85c-4165-84c8-e85f90c38907-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.698066 4888 generic.go:334] "Generic (PLEG): container finished" podID="56e2006b-e85c-4165-84c8-e85f90c38907" containerID="a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4" exitCode=0 Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.698117 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.699117 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56e2006b-e85c-4165-84c8-e85f90c38907","Type":"ContainerDied","Data":"a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4"} Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.699254 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"56e2006b-e85c-4165-84c8-e85f90c38907","Type":"ContainerDied","Data":"0b37b499e729af1c1fa8ee38488752451677bd97fef6054f4aaf8f3379e1229a"} Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.699359 4888 scope.go:117] "RemoveContainer" containerID="2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.720475 4888 scope.go:117] "RemoveContainer" containerID="16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.737644 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.746070 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.757562 4888 scope.go:117] "RemoveContainer" containerID="e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774072 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:41 crc kubenswrapper[4888]: E1006 15:19:41.774534 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="proxy-httpd" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774552 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="proxy-httpd" Oct 06 15:19:41 crc kubenswrapper[4888]: E1006 15:19:41.774569 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="ceilometer-central-agent" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774576 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="ceilometer-central-agent" Oct 06 15:19:41 crc kubenswrapper[4888]: E1006 15:19:41.774593 4888 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="ceilometer-notification-agent" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774604 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="ceilometer-notification-agent" Oct 06 15:19:41 crc kubenswrapper[4888]: E1006 15:19:41.774613 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ea57a3-51e1-49df-a9cb-531568e7867a" containerName="mariadb-database-create" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774620 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ea57a3-51e1-49df-a9cb-531568e7867a" containerName="mariadb-database-create" Oct 06 15:19:41 crc kubenswrapper[4888]: E1006 15:19:41.774634 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d229f482-25b6-4513-a024-45b8556fe4a4" containerName="mariadb-database-create" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774642 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="d229f482-25b6-4513-a024-45b8556fe4a4" containerName="mariadb-database-create" Oct 06 15:19:41 crc kubenswrapper[4888]: E1006 15:19:41.774655 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="sg-core" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774682 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="sg-core" Oct 06 15:19:41 crc kubenswrapper[4888]: E1006 15:19:41.774711 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94e623f3-1187-456b-97e3-d18f0f278c19" containerName="mariadb-database-create" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774720 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="94e623f3-1187-456b-97e3-d18f0f278c19" containerName="mariadb-database-create" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774942 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="ceilometer-notification-agent" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774957 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="d229f482-25b6-4513-a024-45b8556fe4a4" containerName="mariadb-database-create" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774966 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="ceilometer-central-agent" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.774977 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="94e623f3-1187-456b-97e3-d18f0f278c19" containerName="mariadb-database-create" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.775009 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="sg-core" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.775021 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" containerName="proxy-httpd" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.775035 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ea57a3-51e1-49df-a9cb-531568e7867a" containerName="mariadb-database-create" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.777105 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.782763 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.782993 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.787337 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.797416 4888 scope.go:117] "RemoveContainer" containerID="a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.826287 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-scripts\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.826389 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjsst\" (UniqueName: \"kubernetes.io/projected/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-kube-api-access-xjsst\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.826412 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.826443 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-config-data\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.826466 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-log-httpd\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.826519 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-run-httpd\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.826545 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.827449 4888 scope.go:117] "RemoveContainer" containerID="2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381" Oct 06 15:19:41 crc kubenswrapper[4888]: E1006 
15:19:41.828027 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381\": container with ID starting with 2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381 not found: ID does not exist" containerID="2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.828053 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381"} err="failed to get container status \"2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381\": rpc error: code = NotFound desc = could not find container \"2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381\": container with ID starting with 2ef0e6d8080c3242e9285c494d501cfd2201dd0fd7b77a2b83ccc285d6847381 not found: ID does not exist" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.828075 4888 scope.go:117] "RemoveContainer" containerID="16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68" Oct 06 15:19:41 crc kubenswrapper[4888]: E1006 15:19:41.828394 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68\": container with ID starting with 16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68 not found: ID does not exist" containerID="16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.828415 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68"} err="failed to get container status \"16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68\": rpc error: code = NotFound desc = could not find container \"16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68\": container with ID starting with 16781df2e318fccd0c412e1a9e845cb85a615ab56b7432e8a8b65bbd39f73b68 not found: ID does not exist" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.828428 4888 scope.go:117] "RemoveContainer" containerID="e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610" Oct 06 15:19:41 crc kubenswrapper[4888]: E1006 15:19:41.828714 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610\": container with ID starting with e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610 not found: ID does not exist" containerID="e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.828734 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610"} err="failed to get container status \"e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610\": rpc error: code = NotFound desc = could not find container \"e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610\": container with ID starting with e843aafefa0557575394a928533ff7cafcb522e5ffec780956d6ec72f8bb7610 not found: ID does not exist" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.828748 4888 
scope.go:117] "RemoveContainer" containerID="a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4" Oct 06 15:19:41 crc kubenswrapper[4888]: E1006 15:19:41.829000 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4\": container with ID starting with a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4 not found: ID does not exist" containerID="a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.829018 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4"} err="failed to get container status \"a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4\": rpc error: code = NotFound desc = could not find container \"a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4\": container with ID starting with a79486f79e3849d9214bb79020695de38da30e5d8fd330a3e613d2feb38515b4 not found: ID does not exist" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.928425 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-scripts\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.928533 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjsst\" (UniqueName: \"kubernetes.io/projected/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-kube-api-access-xjsst\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.928564 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.928585 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-config-data\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.928614 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-log-httpd\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.929682 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-run-httpd\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.929827 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.929275 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-log-httpd\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.932576 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-run-httpd\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.936395 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.936436 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-scripts\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.941306 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.950631 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-config-data\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:41 crc kubenswrapper[4888]: I1006 15:19:41.958021 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjsst\" (UniqueName: \"kubernetes.io/projected/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-kube-api-access-xjsst\") pod \"ceilometer-0\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " pod="openstack/ceilometer-0" Oct 06 15:19:42 crc kubenswrapper[4888]: I1006 15:19:42.101817 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:19:42 crc kubenswrapper[4888]: I1006 15:19:42.577142 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:42 crc kubenswrapper[4888]: I1006 15:19:42.707710 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab7b6868-40b7-4844-ac6a-c5f26b7421b9","Type":"ContainerStarted","Data":"54e14de599e34a5a40a1f1d4fc66bf1c00f056ef37fb3020060804b5262e5c25"} Oct 06 15:19:42 crc kubenswrapper[4888]: I1006 15:19:42.938210 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e2006b-e85c-4165-84c8-e85f90c38907" path="/var/lib/kubelet/pods/56e2006b-e85c-4165-84c8-e85f90c38907/volumes" Oct 06 15:19:43 crc kubenswrapper[4888]: I1006 15:19:43.718935 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab7b6868-40b7-4844-ac6a-c5f26b7421b9","Type":"ContainerStarted","Data":"a7714aed7ad6f414259f0c98d8cba7c3489527ea8e2b3a991052aed1a1a1d514"} Oct 06 15:19:44 crc kubenswrapper[4888]: I1006 15:19:44.730888 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab7b6868-40b7-4844-ac6a-c5f26b7421b9","Type":"ContainerStarted","Data":"a11a2394ba12a0b8f3d540508fc39e493bc8329257855b4ec0135a200ebf96f9"} Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.760547 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b448-account-create-b9xtx"] Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.762361 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b448-account-create-b9xtx" Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.767143 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.782536 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b448-account-create-b9xtx"] Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.797073 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab7b6868-40b7-4844-ac6a-c5f26b7421b9","Type":"ContainerStarted","Data":"c9e6ee17567fc2d08e3aa0afca942f53845e7e98c657abd2250443385951500f"} Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.824027 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrk29\" (UniqueName: \"kubernetes.io/projected/dd825252-f2fb-4127-b909-9fa9be7d6d39-kube-api-access-vrk29\") pod \"nova-cell0-b448-account-create-b9xtx\" (UID: \"dd825252-f2fb-4127-b909-9fa9be7d6d39\") " pod="openstack/nova-cell0-b448-account-create-b9xtx" Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.926294 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrk29\" (UniqueName: \"kubernetes.io/projected/dd825252-f2fb-4127-b909-9fa9be7d6d39-kube-api-access-vrk29\") pod \"nova-cell0-b448-account-create-b9xtx\" (UID: \"dd825252-f2fb-4127-b909-9fa9be7d6d39\") " pod="openstack/nova-cell0-b448-account-create-b9xtx" Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.941387 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-59e6-account-create-zd76t"] Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.942734 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-59e6-account-create-zd76t" Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.949852 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.951089 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-59e6-account-create-zd76t"] Oct 06 15:19:46 crc kubenswrapper[4888]: I1006 15:19:46.974509 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrk29\" (UniqueName: \"kubernetes.io/projected/dd825252-f2fb-4127-b909-9fa9be7d6d39-kube-api-access-vrk29\") pod \"nova-cell0-b448-account-create-b9xtx\" (UID: \"dd825252-f2fb-4127-b909-9fa9be7d6d39\") " pod="openstack/nova-cell0-b448-account-create-b9xtx" Oct 06 15:19:47 crc kubenswrapper[4888]: I1006 15:19:47.029772 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjr45\" (UniqueName: \"kubernetes.io/projected/1ccea228-64b2-4ec5-a967-31d1430a8614-kube-api-access-sjr45\") pod \"nova-cell1-59e6-account-create-zd76t\" (UID: \"1ccea228-64b2-4ec5-a967-31d1430a8614\") " pod="openstack/nova-cell1-59e6-account-create-zd76t" Oct 06 15:19:47 crc kubenswrapper[4888]: I1006 15:19:47.091413 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b448-account-create-b9xtx" Oct 06 15:19:47 crc kubenswrapper[4888]: I1006 15:19:47.135856 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjr45\" (UniqueName: \"kubernetes.io/projected/1ccea228-64b2-4ec5-a967-31d1430a8614-kube-api-access-sjr45\") pod \"nova-cell1-59e6-account-create-zd76t\" (UID: \"1ccea228-64b2-4ec5-a967-31d1430a8614\") " pod="openstack/nova-cell1-59e6-account-create-zd76t" Oct 06 15:19:47 crc kubenswrapper[4888]: I1006 15:19:47.154177 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjr45\" (UniqueName: \"kubernetes.io/projected/1ccea228-64b2-4ec5-a967-31d1430a8614-kube-api-access-sjr45\") pod \"nova-cell1-59e6-account-create-zd76t\" (UID: \"1ccea228-64b2-4ec5-a967-31d1430a8614\") " pod="openstack/nova-cell1-59e6-account-create-zd76t" Oct 06 15:19:47 crc kubenswrapper[4888]: I1006 15:19:47.266753 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-59e6-account-create-zd76t" Oct 06 15:19:47 crc kubenswrapper[4888]: I1006 15:19:47.710783 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b448-account-create-b9xtx"] Oct 06 15:19:47 crc kubenswrapper[4888]: I1006 15:19:47.809053 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab7b6868-40b7-4844-ac6a-c5f26b7421b9","Type":"ContainerStarted","Data":"b149c8cf9e38468dbfcc2c00a5300001419c555fd5ed8a4ddcce06e8f60b700e"} Oct 06 15:19:47 crc kubenswrapper[4888]: I1006 15:19:47.809439 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 15:19:47 crc kubenswrapper[4888]: I1006 15:19:47.811562 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b448-account-create-b9xtx" event={"ID":"dd825252-f2fb-4127-b909-9fa9be7d6d39","Type":"ContainerStarted","Data":"48f191db466b34973930bf821411bd70e0cf6e724ad3731669ba6e856bc969fc"} Oct 06 15:19:47 crc kubenswrapper[4888]: I1006 15:19:47.833280 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.380185935 podStartE2EDuration="6.833256635s" podCreationTimestamp="2025-10-06 15:19:41 +0000 UTC" firstStartedPulling="2025-10-06 15:19:42.580655905 +0000 UTC m=+1122.393006623" lastFinishedPulling="2025-10-06 15:19:47.033726615 +0000 UTC m=+1126.846077323" observedRunningTime="2025-10-06 15:19:47.829721115 +0000 UTC m=+1127.642071853" watchObservedRunningTime="2025-10-06 15:19:47.833256635 +0000 UTC m=+1127.645607363" Oct 06 15:19:47 crc kubenswrapper[4888]: I1006 15:19:47.903082 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-59e6-account-create-zd76t"] Oct 06 15:19:47 crc kubenswrapper[4888]: W1006 15:19:47.909375 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ccea228_64b2_4ec5_a967_31d1430a8614.slice/crio-69b0236563293a788ce8f9c27ad1b3fecdea5a992b515e3e647f54a7f0e55f57 WatchSource:0}: Error finding container 69b0236563293a788ce8f9c27ad1b3fecdea5a992b515e3e647f54a7f0e55f57: Status 404 returned error can't find the container with id 69b0236563293a788ce8f9c27ad1b3fecdea5a992b515e3e647f54a7f0e55f57 Oct 06 15:19:48 crc kubenswrapper[4888]: I1006 15:19:48.822191 4888 generic.go:334] "Generic (PLEG): container finished" podID="dd825252-f2fb-4127-b909-9fa9be7d6d39" containerID="5db4f16a31b5f00b1351bc57bf8e218e803f66df4667b77a6ab63e536f81f1d2" exitCode=0 Oct 06 15:19:48 crc kubenswrapper[4888]: I1006 15:19:48.822250 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b448-account-create-b9xtx" event={"ID":"dd825252-f2fb-4127-b909-9fa9be7d6d39","Type":"ContainerDied","Data":"5db4f16a31b5f00b1351bc57bf8e218e803f66df4667b77a6ab63e536f81f1d2"} Oct 06 15:19:48 crc kubenswrapper[4888]: I1006 15:19:48.824812 4888 generic.go:334] "Generic (PLEG): container finished" podID="1ccea228-64b2-4ec5-a967-31d1430a8614" containerID="eca652b7a8d8d03eeca70d3351a83368318a5bb003d8f25f125dd15a707163bb" exitCode=0 Oct 06 15:19:48 crc kubenswrapper[4888]: I1006 15:19:48.824869 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-59e6-account-create-zd76t" event={"ID":"1ccea228-64b2-4ec5-a967-31d1430a8614","Type":"ContainerDied","Data":"eca652b7a8d8d03eeca70d3351a83368318a5bb003d8f25f125dd15a707163bb"} Oct 06 15:19:48 crc kubenswrapper[4888]: I1006 
15:19:48.824911 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-59e6-account-create-zd76t" event={"ID":"1ccea228-64b2-4ec5-a967-31d1430a8614","Type":"ContainerStarted","Data":"69b0236563293a788ce8f9c27ad1b3fecdea5a992b515e3e647f54a7f0e55f57"} Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.278431 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-59e6-account-create-zd76t" Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.285644 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b448-account-create-b9xtx" Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.391373 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrk29\" (UniqueName: \"kubernetes.io/projected/dd825252-f2fb-4127-b909-9fa9be7d6d39-kube-api-access-vrk29\") pod \"dd825252-f2fb-4127-b909-9fa9be7d6d39\" (UID: \"dd825252-f2fb-4127-b909-9fa9be7d6d39\") " Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.391441 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjr45\" (UniqueName: \"kubernetes.io/projected/1ccea228-64b2-4ec5-a967-31d1430a8614-kube-api-access-sjr45\") pod \"1ccea228-64b2-4ec5-a967-31d1430a8614\" (UID: \"1ccea228-64b2-4ec5-a967-31d1430a8614\") " Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.397406 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd825252-f2fb-4127-b909-9fa9be7d6d39-kube-api-access-vrk29" (OuterVolumeSpecName: "kube-api-access-vrk29") pod "dd825252-f2fb-4127-b909-9fa9be7d6d39" (UID: "dd825252-f2fb-4127-b909-9fa9be7d6d39"). InnerVolumeSpecName "kube-api-access-vrk29". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.397924 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ccea228-64b2-4ec5-a967-31d1430a8614-kube-api-access-sjr45" (OuterVolumeSpecName: "kube-api-access-sjr45") pod "1ccea228-64b2-4ec5-a967-31d1430a8614" (UID: "1ccea228-64b2-4ec5-a967-31d1430a8614"). InnerVolumeSpecName "kube-api-access-sjr45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.494588 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrk29\" (UniqueName: \"kubernetes.io/projected/dd825252-f2fb-4127-b909-9fa9be7d6d39-kube-api-access-vrk29\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.494635 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjr45\" (UniqueName: \"kubernetes.io/projected/1ccea228-64b2-4ec5-a967-31d1430a8614-kube-api-access-sjr45\") on node \"crc\" DevicePath \"\"" Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.843192 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-59e6-account-create-zd76t" event={"ID":"1ccea228-64b2-4ec5-a967-31d1430a8614","Type":"ContainerDied","Data":"69b0236563293a788ce8f9c27ad1b3fecdea5a992b515e3e647f54a7f0e55f57"} Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.843238 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69b0236563293a788ce8f9c27ad1b3fecdea5a992b515e3e647f54a7f0e55f57" Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.843298 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-59e6-account-create-zd76t" Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.851008 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b448-account-create-b9xtx" event={"ID":"dd825252-f2fb-4127-b909-9fa9be7d6d39","Type":"ContainerDied","Data":"48f191db466b34973930bf821411bd70e0cf6e724ad3731669ba6e856bc969fc"} Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.851063 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f191db466b34973930bf821411bd70e0cf6e724ad3731669ba6e856bc969fc" Oct 06 15:19:50 crc kubenswrapper[4888]: I1006 15:19:50.851107 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b448-account-create-b9xtx" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.021103 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-snfqf"] Oct 06 15:19:52 crc kubenswrapper[4888]: E1006 15:19:52.021812 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd825252-f2fb-4127-b909-9fa9be7d6d39" containerName="mariadb-account-create" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.021830 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd825252-f2fb-4127-b909-9fa9be7d6d39" containerName="mariadb-account-create" Oct 06 15:19:52 crc kubenswrapper[4888]: E1006 15:19:52.021860 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ccea228-64b2-4ec5-a967-31d1430a8614" containerName="mariadb-account-create" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.021871 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ccea228-64b2-4ec5-a967-31d1430a8614" containerName="mariadb-account-create" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.022077 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ccea228-64b2-4ec5-a967-31d1430a8614" containerName="mariadb-account-create" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.022103 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd825252-f2fb-4127-b909-9fa9be7d6d39" containerName="mariadb-account-create" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.022913 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.031319 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-72h2t" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.034698 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-snfqf"] Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.036101 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.036381 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.127282 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.127550 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-scripts\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.127741 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-config-data\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " 
pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.127882 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lx79\" (UniqueName: \"kubernetes.io/projected/28b31342-1634-47b6-ac0e-a6f4937111f7-kube-api-access-2lx79\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.229539 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-scripts\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.229625 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-config-data\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.229704 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lx79\" (UniqueName: \"kubernetes.io/projected/28b31342-1634-47b6-ac0e-a6f4937111f7-kube-api-access-2lx79\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.229737 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.235837 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.244811 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-scripts\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.249618 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-config-data\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.264419 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lx79\" (UniqueName: \"kubernetes.io/projected/28b31342-1634-47b6-ac0e-a6f4937111f7-kube-api-access-2lx79\") pod \"nova-cell0-conductor-db-sync-snfqf\" (UID: 
\"28b31342-1634-47b6-ac0e-a6f4937111f7\") " pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.344876 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.837650 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-snfqf"] Oct 06 15:19:52 crc kubenswrapper[4888]: I1006 15:19:52.869305 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-snfqf" event={"ID":"28b31342-1634-47b6-ac0e-a6f4937111f7","Type":"ContainerStarted","Data":"79f3e961c09564530ec7e14154439c8aa26178667759f9ff8dd5e0c35dbb393f"} Oct 06 15:19:56 crc kubenswrapper[4888]: I1006 15:19:56.455833 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0ed5-account-create-8k69s"] Oct 06 15:19:56 crc kubenswrapper[4888]: I1006 15:19:56.457590 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0ed5-account-create-8k69s" Oct 06 15:19:56 crc kubenswrapper[4888]: I1006 15:19:56.459429 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 06 15:19:56 crc kubenswrapper[4888]: I1006 15:19:56.464973 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0ed5-account-create-8k69s"] Oct 06 15:19:56 crc kubenswrapper[4888]: I1006 15:19:56.617596 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljt6d\" (UniqueName: \"kubernetes.io/projected/71c1c65d-3a97-48cc-8377-cbc4cb23ddac-kube-api-access-ljt6d\") pod \"nova-api-0ed5-account-create-8k69s\" (UID: \"71c1c65d-3a97-48cc-8377-cbc4cb23ddac\") " pod="openstack/nova-api-0ed5-account-create-8k69s" Oct 06 15:19:56 crc kubenswrapper[4888]: I1006 15:19:56.719159 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljt6d\" (UniqueName: \"kubernetes.io/projected/71c1c65d-3a97-48cc-8377-cbc4cb23ddac-kube-api-access-ljt6d\") pod \"nova-api-0ed5-account-create-8k69s\" (UID: \"71c1c65d-3a97-48cc-8377-cbc4cb23ddac\") " pod="openstack/nova-api-0ed5-account-create-8k69s" Oct 06 15:19:56 crc kubenswrapper[4888]: I1006 15:19:56.738957 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljt6d\" (UniqueName: \"kubernetes.io/projected/71c1c65d-3a97-48cc-8377-cbc4cb23ddac-kube-api-access-ljt6d\") pod \"nova-api-0ed5-account-create-8k69s\" (UID: \"71c1c65d-3a97-48cc-8377-cbc4cb23ddac\") " pod="openstack/nova-api-0ed5-account-create-8k69s" Oct 06 15:19:56 crc kubenswrapper[4888]: I1006 15:19:56.843515 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0ed5-account-create-8k69s" Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.100526 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.103401 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="ceilometer-central-agent" containerID="cri-o://a7714aed7ad6f414259f0c98d8cba7c3489527ea8e2b3a991052aed1a1a1d514" gracePeriod=30 Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.103943 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="ceilometer-notification-agent" containerID="cri-o://a11a2394ba12a0b8f3d540508fc39e493bc8329257855b4ec0135a200ebf96f9" gracePeriod=30 Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.103984 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="proxy-httpd" containerID="cri-o://b149c8cf9e38468dbfcc2c00a5300001419c555fd5ed8a4ddcce06e8f60b700e" gracePeriod=30 Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.103984 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="sg-core" containerID="cri-o://c9e6ee17567fc2d08e3aa0afca942f53845e7e98c657abd2250443385951500f" gracePeriod=30 Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.112990 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.179:3000/\": EOF" Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.948475 4888 generic.go:334] "Generic (PLEG): container finished" podID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerID="b149c8cf9e38468dbfcc2c00a5300001419c555fd5ed8a4ddcce06e8f60b700e" exitCode=0 Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.948766 4888 generic.go:334] "Generic (PLEG): container finished" podID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerID="c9e6ee17567fc2d08e3aa0afca942f53845e7e98c657abd2250443385951500f" exitCode=2 Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.948778 4888 generic.go:334] "Generic (PLEG): container finished" podID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerID="a7714aed7ad6f414259f0c98d8cba7c3489527ea8e2b3a991052aed1a1a1d514" exitCode=0 Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.948680 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab7b6868-40b7-4844-ac6a-c5f26b7421b9","Type":"ContainerDied","Data":"b149c8cf9e38468dbfcc2c00a5300001419c555fd5ed8a4ddcce06e8f60b700e"} Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.948833 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab7b6868-40b7-4844-ac6a-c5f26b7421b9","Type":"ContainerDied","Data":"c9e6ee17567fc2d08e3aa0afca942f53845e7e98c657abd2250443385951500f"} Oct 06 15:19:58 crc kubenswrapper[4888]: I1006 15:19:58.948851 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ab7b6868-40b7-4844-ac6a-c5f26b7421b9","Type":"ContainerDied","Data":"a7714aed7ad6f414259f0c98d8cba7c3489527ea8e2b3a991052aed1a1a1d514"} Oct 06 15:20:02 crc kubenswrapper[4888]: I1006 15:20:02.563662 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:20:02 crc kubenswrapper[4888]: I1006 15:20:02.564207 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:20:02 crc kubenswrapper[4888]: I1006 15:20:02.564251 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:20:02 crc kubenswrapper[4888]: I1006 15:20:02.564976 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bf18ef6eff916382fcaa294f56d74e00f198381baa7886ed31f9974dc677b14"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 15:20:02 crc kubenswrapper[4888]: I1006 15:20:02.565035 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://2bf18ef6eff916382fcaa294f56d74e00f198381baa7886ed31f9974dc677b14" gracePeriod=600 Oct 06 15:20:02 crc kubenswrapper[4888]: I1006 15:20:02.577527 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0ed5-account-create-8k69s"] Oct 06 15:20:02 crc kubenswrapper[4888]: W1006 15:20:02.584172 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71c1c65d_3a97_48cc_8377_cbc4cb23ddac.slice/crio-b0f7a508ddc46d4aaf4530bbb9061e93bceb1b679d5eba166511f23a7b04abc1 WatchSource:0}: Error finding container b0f7a508ddc46d4aaf4530bbb9061e93bceb1b679d5eba166511f23a7b04abc1: Status 404 returned error can't find the container with id b0f7a508ddc46d4aaf4530bbb9061e93bceb1b679d5eba166511f23a7b04abc1 Oct 06 15:20:02 crc kubenswrapper[4888]: I1006 15:20:02.588593 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.038319 4888 generic.go:334] "Generic (PLEG): container finished" podID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerID="a11a2394ba12a0b8f3d540508fc39e493bc8329257855b4ec0135a200ebf96f9" exitCode=0 Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.038466 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab7b6868-40b7-4844-ac6a-c5f26b7421b9","Type":"ContainerDied","Data":"a11a2394ba12a0b8f3d540508fc39e493bc8329257855b4ec0135a200ebf96f9"} Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.054256 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" 
containerID="2bf18ef6eff916382fcaa294f56d74e00f198381baa7886ed31f9974dc677b14" exitCode=0 Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.054342 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"2bf18ef6eff916382fcaa294f56d74e00f198381baa7886ed31f9974dc677b14"} Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.054373 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"899ff317fa3e35b62f992c01acf0a8a3a07cd1a84a2e8f800bd40f108362572b"} Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.054391 4888 scope.go:117] "RemoveContainer" containerID="c7f872a375e0d5fa3a0376b8ecf93b05be1a27ff35604df3b986a455e732259f" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.061203 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-snfqf" event={"ID":"28b31342-1634-47b6-ac0e-a6f4937111f7","Type":"ContainerStarted","Data":"df6290499d34c46bcce9be0a770a6105880b511dd603638463d2d0467a94f87f"} Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.071735 4888 generic.go:334] "Generic (PLEG): container finished" podID="71c1c65d-3a97-48cc-8377-cbc4cb23ddac" containerID="a591a4654ae45b59eddaf24afb8d08d20fb3aa55d56a7b0cd3bb8d157653929f" exitCode=0 Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.071786 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0ed5-account-create-8k69s" event={"ID":"71c1c65d-3a97-48cc-8377-cbc4cb23ddac","Type":"ContainerDied","Data":"a591a4654ae45b59eddaf24afb8d08d20fb3aa55d56a7b0cd3bb8d157653929f"} Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.071829 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0ed5-account-create-8k69s" event={"ID":"71c1c65d-3a97-48cc-8377-cbc4cb23ddac","Type":"ContainerStarted","Data":"b0f7a508ddc46d4aaf4530bbb9061e93bceb1b679d5eba166511f23a7b04abc1"} Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.155708 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-snfqf" podStartSLOduration=2.013254284 podStartE2EDuration="11.155687314s" podCreationTimestamp="2025-10-06 15:19:52 +0000 UTC" firstStartedPulling="2025-10-06 15:19:52.85039491 +0000 UTC m=+1132.662745628" lastFinishedPulling="2025-10-06 15:20:01.99282794 +0000 UTC m=+1141.805178658" observedRunningTime="2025-10-06 15:20:03.130445769 +0000 UTC m=+1142.942796487" watchObservedRunningTime="2025-10-06 15:20:03.155687314 +0000 UTC m=+1142.968038032" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.203095 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.367978 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-run-httpd\") pod \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.368136 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-combined-ca-bundle\") pod \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.368344 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ab7b6868-40b7-4844-ac6a-c5f26b7421b9" (UID: "ab7b6868-40b7-4844-ac6a-c5f26b7421b9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.368768 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-log-httpd\") pod \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.368821 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-config-data\") pod \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.368924 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-scripts\") pod \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.368987 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjsst\" (UniqueName: \"kubernetes.io/projected/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-kube-api-access-xjsst\") pod \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.369023 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-sg-core-conf-yaml\") pod \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\" (UID: \"ab7b6868-40b7-4844-ac6a-c5f26b7421b9\") " Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.369481 4888 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.370460 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ab7b6868-40b7-4844-ac6a-c5f26b7421b9" (UID: "ab7b6868-40b7-4844-ac6a-c5f26b7421b9"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.376526 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-kube-api-access-xjsst" (OuterVolumeSpecName: "kube-api-access-xjsst") pod "ab7b6868-40b7-4844-ac6a-c5f26b7421b9" (UID: "ab7b6868-40b7-4844-ac6a-c5f26b7421b9"). InnerVolumeSpecName "kube-api-access-xjsst". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.377507 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-scripts" (OuterVolumeSpecName: "scripts") pod "ab7b6868-40b7-4844-ac6a-c5f26b7421b9" (UID: "ab7b6868-40b7-4844-ac6a-c5f26b7421b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.406435 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ab7b6868-40b7-4844-ac6a-c5f26b7421b9" (UID: "ab7b6868-40b7-4844-ac6a-c5f26b7421b9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.465091 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab7b6868-40b7-4844-ac6a-c5f26b7421b9" (UID: "ab7b6868-40b7-4844-ac6a-c5f26b7421b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.471182 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.471358 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjsst\" (UniqueName: \"kubernetes.io/projected/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-kube-api-access-xjsst\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.471419 4888 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.471506 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.471562 4888 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.476310 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-config-data" (OuterVolumeSpecName: "config-data") pod "ab7b6868-40b7-4844-ac6a-c5f26b7421b9" (UID: "ab7b6868-40b7-4844-ac6a-c5f26b7421b9"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:03 crc kubenswrapper[4888]: I1006 15:20:03.573698 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7b6868-40b7-4844-ac6a-c5f26b7421b9-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.085539 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.086766 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab7b6868-40b7-4844-ac6a-c5f26b7421b9","Type":"ContainerDied","Data":"54e14de599e34a5a40a1f1d4fc66bf1c00f056ef37fb3020060804b5262e5c25"} Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.086819 4888 scope.go:117] "RemoveContainer" containerID="b149c8cf9e38468dbfcc2c00a5300001419c555fd5ed8a4ddcce06e8f60b700e" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.131664 4888 scope.go:117] "RemoveContainer" containerID="c9e6ee17567fc2d08e3aa0afca942f53845e7e98c657abd2250443385951500f" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.137393 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.148323 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.177417 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:20:04 crc kubenswrapper[4888]: E1006 15:20:04.178301 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="ceilometer-central-agent" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.179357 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="ceilometer-central-agent" Oct 06 15:20:04 crc kubenswrapper[4888]: E1006 15:20:04.180146 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="ceilometer-notification-agent" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.180501 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="ceilometer-notification-agent" Oct 06 15:20:04 crc kubenswrapper[4888]: E1006 15:20:04.180603 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="sg-core" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.180668 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="sg-core" Oct 06 15:20:04 crc kubenswrapper[4888]: E1006 15:20:04.180743 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="proxy-httpd" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.180825 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="proxy-httpd" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.181171 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="ceilometer-notification-agent" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.182776 4888 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="sg-core" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.182857 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="ceilometer-central-agent" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.182944 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" containerName="proxy-httpd" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.188968 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.193607 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.193860 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.197097 4888 scope.go:117] "RemoveContainer" containerID="a11a2394ba12a0b8f3d540508fc39e493bc8329257855b4ec0135a200ebf96f9" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.197789 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.242144 4888 scope.go:117] "RemoveContainer" containerID="a7714aed7ad6f414259f0c98d8cba7c3489527ea8e2b3a991052aed1a1a1d514" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.288539 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.288615 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-run-httpd\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.288671 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-scripts\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.288695 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.288760 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-config-data\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.288846 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-log-httpd\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.294147 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpn7d\" (UniqueName: \"kubernetes.io/projected/4735cf9f-0b78-4094-8431-f15fe0bf7d34-kube-api-access-dpn7d\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.397206 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.397262 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-run-httpd\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.397311 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-scripts\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.397330 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.397383 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-config-data\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.397415 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-log-httpd\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.397458 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpn7d\" (UniqueName: \"kubernetes.io/projected/4735cf9f-0b78-4094-8431-f15fe0bf7d34-kube-api-access-dpn7d\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.397771 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-run-httpd\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.398671 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-log-httpd\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.403723 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.404183 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.412709 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-scripts\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.416781 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-config-data\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.419085 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpn7d\" (UniqueName: \"kubernetes.io/projected/4735cf9f-0b78-4094-8431-f15fe0bf7d34-kube-api-access-dpn7d\") pod \"ceilometer-0\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.539991 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.564620 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0ed5-account-create-8k69s" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.733219 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljt6d\" (UniqueName: \"kubernetes.io/projected/71c1c65d-3a97-48cc-8377-cbc4cb23ddac-kube-api-access-ljt6d\") pod \"71c1c65d-3a97-48cc-8377-cbc4cb23ddac\" (UID: \"71c1c65d-3a97-48cc-8377-cbc4cb23ddac\") " Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.738428 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c1c65d-3a97-48cc-8377-cbc4cb23ddac-kube-api-access-ljt6d" (OuterVolumeSpecName: "kube-api-access-ljt6d") pod "71c1c65d-3a97-48cc-8377-cbc4cb23ddac" (UID: "71c1c65d-3a97-48cc-8377-cbc4cb23ddac"). InnerVolumeSpecName "kube-api-access-ljt6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.835473 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljt6d\" (UniqueName: \"kubernetes.io/projected/71c1c65d-3a97-48cc-8377-cbc4cb23ddac-kube-api-access-ljt6d\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:04 crc kubenswrapper[4888]: I1006 15:20:04.932150 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7b6868-40b7-4844-ac6a-c5f26b7421b9" path="/var/lib/kubelet/pods/ab7b6868-40b7-4844-ac6a-c5f26b7421b9/volumes" Oct 06 15:20:05 crc kubenswrapper[4888]: I1006 15:20:05.068667 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:20:05 crc kubenswrapper[4888]: I1006 15:20:05.101968 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4735cf9f-0b78-4094-8431-f15fe0bf7d34","Type":"ContainerStarted","Data":"51ef018b1fa1fd76f4135b02edd5850583e21120d179499dc77b6e6246af16ad"} Oct 06 15:20:05 crc kubenswrapper[4888]: I1006 15:20:05.103449 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0ed5-account-create-8k69s" event={"ID":"71c1c65d-3a97-48cc-8377-cbc4cb23ddac","Type":"ContainerDied","Data":"b0f7a508ddc46d4aaf4530bbb9061e93bceb1b679d5eba166511f23a7b04abc1"} Oct 06 15:20:05 crc kubenswrapper[4888]: I1006 15:20:05.103471 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0f7a508ddc46d4aaf4530bbb9061e93bceb1b679d5eba166511f23a7b04abc1" Oct 06 15:20:05 crc kubenswrapper[4888]: I1006 15:20:05.103515 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0ed5-account-create-8k69s" Oct 06 15:20:06 crc kubenswrapper[4888]: I1006 15:20:06.114283 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4735cf9f-0b78-4094-8431-f15fe0bf7d34","Type":"ContainerStarted","Data":"07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78"} Oct 06 15:20:07 crc kubenswrapper[4888]: I1006 15:20:07.137097 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4735cf9f-0b78-4094-8431-f15fe0bf7d34","Type":"ContainerStarted","Data":"b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c"} Oct 06 15:20:08 crc kubenswrapper[4888]: I1006 15:20:08.154976 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4735cf9f-0b78-4094-8431-f15fe0bf7d34","Type":"ContainerStarted","Data":"ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4"} Oct 06 15:20:09 crc kubenswrapper[4888]: I1006 15:20:09.170430 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4735cf9f-0b78-4094-8431-f15fe0bf7d34","Type":"ContainerStarted","Data":"01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a"} Oct 06 15:20:09 crc kubenswrapper[4888]: I1006 15:20:09.171946 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 15:20:09 crc kubenswrapper[4888]: I1006 15:20:09.201121 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.4130022580000001 podStartE2EDuration="5.201100594s" podCreationTimestamp="2025-10-06 15:20:04 +0000 UTC" firstStartedPulling="2025-10-06 15:20:05.063311795 +0000 UTC m=+1144.875662513" lastFinishedPulling="2025-10-06 15:20:08.851410131 +0000 
UTC m=+1148.663760849" observedRunningTime="2025-10-06 15:20:09.191270584 +0000 UTC m=+1149.003621302" watchObservedRunningTime="2025-10-06 15:20:09.201100594 +0000 UTC m=+1149.013451312" Oct 06 15:20:15 crc kubenswrapper[4888]: I1006 15:20:15.225956 4888 generic.go:334] "Generic (PLEG): container finished" podID="28b31342-1634-47b6-ac0e-a6f4937111f7" containerID="df6290499d34c46bcce9be0a770a6105880b511dd603638463d2d0467a94f87f" exitCode=0 Oct 06 15:20:15 crc kubenswrapper[4888]: I1006 15:20:15.226034 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-snfqf" event={"ID":"28b31342-1634-47b6-ac0e-a6f4937111f7","Type":"ContainerDied","Data":"df6290499d34c46bcce9be0a770a6105880b511dd603638463d2d0467a94f87f"} Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.525316 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.586008 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-scripts\") pod \"28b31342-1634-47b6-ac0e-a6f4937111f7\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.586370 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lx79\" (UniqueName: \"kubernetes.io/projected/28b31342-1634-47b6-ac0e-a6f4937111f7-kube-api-access-2lx79\") pod \"28b31342-1634-47b6-ac0e-a6f4937111f7\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.587562 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-config-data\") pod \"28b31342-1634-47b6-ac0e-a6f4937111f7\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.587711 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-combined-ca-bundle\") pod \"28b31342-1634-47b6-ac0e-a6f4937111f7\" (UID: \"28b31342-1634-47b6-ac0e-a6f4937111f7\") " Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.592489 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28b31342-1634-47b6-ac0e-a6f4937111f7-kube-api-access-2lx79" (OuterVolumeSpecName: "kube-api-access-2lx79") pod "28b31342-1634-47b6-ac0e-a6f4937111f7" (UID: "28b31342-1634-47b6-ac0e-a6f4937111f7"). InnerVolumeSpecName "kube-api-access-2lx79". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.594511 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-scripts" (OuterVolumeSpecName: "scripts") pod "28b31342-1634-47b6-ac0e-a6f4937111f7" (UID: "28b31342-1634-47b6-ac0e-a6f4937111f7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.614592 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28b31342-1634-47b6-ac0e-a6f4937111f7" (UID: "28b31342-1634-47b6-ac0e-a6f4937111f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.618917 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-config-data" (OuterVolumeSpecName: "config-data") pod "28b31342-1634-47b6-ac0e-a6f4937111f7" (UID: "28b31342-1634-47b6-ac0e-a6f4937111f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.690107 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lx79\" (UniqueName: \"kubernetes.io/projected/28b31342-1634-47b6-ac0e-a6f4937111f7-kube-api-access-2lx79\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.690146 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.690157 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:16 crc kubenswrapper[4888]: I1006 15:20:16.690167 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28b31342-1634-47b6-ac0e-a6f4937111f7-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.250171 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-snfqf" event={"ID":"28b31342-1634-47b6-ac0e-a6f4937111f7","Type":"ContainerDied","Data":"79f3e961c09564530ec7e14154439c8aa26178667759f9ff8dd5e0c35dbb393f"} Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.250233 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79f3e961c09564530ec7e14154439c8aa26178667759f9ff8dd5e0c35dbb393f" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.250318 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-snfqf" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.397738 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 15:20:17 crc kubenswrapper[4888]: E1006 15:20:17.398104 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c1c65d-3a97-48cc-8377-cbc4cb23ddac" containerName="mariadb-account-create" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.398119 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c1c65d-3a97-48cc-8377-cbc4cb23ddac" containerName="mariadb-account-create" Oct 06 15:20:17 crc kubenswrapper[4888]: E1006 15:20:17.398134 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28b31342-1634-47b6-ac0e-a6f4937111f7" containerName="nova-cell0-conductor-db-sync" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.398140 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="28b31342-1634-47b6-ac0e-a6f4937111f7" containerName="nova-cell0-conductor-db-sync" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.398315 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c1c65d-3a97-48cc-8377-cbc4cb23ddac" containerName="mariadb-account-create" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.398334 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="28b31342-1634-47b6-ac0e-a6f4937111f7" containerName="nova-cell0-conductor-db-sync" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.398914 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.401260 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-72h2t" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.409810 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.415211 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.503663 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmfsd\" (UniqueName: \"kubernetes.io/projected/bf04c24d-a750-46b9-8b0a-26cb500c2494-kube-api-access-cmfsd\") pod \"nova-cell0-conductor-0\" (UID: \"bf04c24d-a750-46b9-8b0a-26cb500c2494\") " pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.503760 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf04c24d-a750-46b9-8b0a-26cb500c2494-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf04c24d-a750-46b9-8b0a-26cb500c2494\") " pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.504064 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf04c24d-a750-46b9-8b0a-26cb500c2494-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf04c24d-a750-46b9-8b0a-26cb500c2494\") " pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.605666 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bf04c24d-a750-46b9-8b0a-26cb500c2494-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf04c24d-a750-46b9-8b0a-26cb500c2494\") " pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.605777 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmfsd\" (UniqueName: \"kubernetes.io/projected/bf04c24d-a750-46b9-8b0a-26cb500c2494-kube-api-access-cmfsd\") pod \"nova-cell0-conductor-0\" (UID: \"bf04c24d-a750-46b9-8b0a-26cb500c2494\") " pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.605893 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf04c24d-a750-46b9-8b0a-26cb500c2494-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf04c24d-a750-46b9-8b0a-26cb500c2494\") " pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.611042 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf04c24d-a750-46b9-8b0a-26cb500c2494-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bf04c24d-a750-46b9-8b0a-26cb500c2494\") " pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.611347 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf04c24d-a750-46b9-8b0a-26cb500c2494-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bf04c24d-a750-46b9-8b0a-26cb500c2494\") " pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.622552 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmfsd\" (UniqueName: \"kubernetes.io/projected/bf04c24d-a750-46b9-8b0a-26cb500c2494-kube-api-access-cmfsd\") pod \"nova-cell0-conductor-0\" (UID: \"bf04c24d-a750-46b9-8b0a-26cb500c2494\") " pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:17 crc kubenswrapper[4888]: I1006 15:20:17.716304 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:18 crc kubenswrapper[4888]: I1006 15:20:18.145727 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 15:20:18 crc kubenswrapper[4888]: W1006 15:20:18.156873 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf04c24d_a750_46b9_8b0a_26cb500c2494.slice/crio-d6fa8203ddb6b1bf549e39d6d48920480d7eef830a833c73082fc60d73778dda WatchSource:0}: Error finding container d6fa8203ddb6b1bf549e39d6d48920480d7eef830a833c73082fc60d73778dda: Status 404 returned error can't find the container with id d6fa8203ddb6b1bf549e39d6d48920480d7eef830a833c73082fc60d73778dda Oct 06 15:20:18 crc kubenswrapper[4888]: I1006 15:20:18.260094 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf04c24d-a750-46b9-8b0a-26cb500c2494","Type":"ContainerStarted","Data":"d6fa8203ddb6b1bf549e39d6d48920480d7eef830a833c73082fc60d73778dda"} Oct 06 15:20:19 crc kubenswrapper[4888]: I1006 15:20:19.281159 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bf04c24d-a750-46b9-8b0a-26cb500c2494","Type":"ContainerStarted","Data":"450cbb90a9d623271fef9124618b512cb5faa9291ee0bd0d7030328ed99b6b35"} Oct 06 15:20:19 crc kubenswrapper[4888]: I1006 15:20:19.281654 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:19 crc kubenswrapper[4888]: I1006 15:20:19.309877 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.309857338 podStartE2EDuration="2.309857338s" podCreationTimestamp="2025-10-06 15:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:20:19.301181734 +0000 UTC m=+1159.113532462" watchObservedRunningTime="2025-10-06 15:20:19.309857338 +0000 UTC m=+1159.122208056" Oct 06 15:20:27 crc kubenswrapper[4888]: I1006 15:20:27.746869 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.297260 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-s4mwc"] Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.298782 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.302368 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.302570 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.314850 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s4mwc"] Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.411600 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk728\" (UniqueName: \"kubernetes.io/projected/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-kube-api-access-vk728\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.411665 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-scripts\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.411741 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.412017 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-config-data\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.515761 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk728\" (UniqueName: \"kubernetes.io/projected/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-kube-api-access-vk728\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.515846 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-scripts\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.515866 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.515954 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-config-data\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.524148 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-config-data\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.542184 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.559291 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-scripts\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.570532 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.574604 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.587383 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.599062 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk728\" (UniqueName: \"kubernetes.io/projected/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-kube-api-access-vk728\") pod \"nova-cell0-cell-mapping-s4mwc\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.617708 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.618342 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-config-data\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.618399 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs68x\" (UniqueName: \"kubernetes.io/projected/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-kube-api-access-bs68x\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.618488 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 
15:20:28.618513 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-logs\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.637201 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.697908 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.699841 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.718182 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.719874 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-config-data\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.719931 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs68x\" (UniqueName: \"kubernetes.io/projected/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-kube-api-access-bs68x\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.720173 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.720205 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-logs\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.732164 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-logs\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.740106 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.754182 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-config-data\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.786612 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bs68x\" (UniqueName: \"kubernetes.io/projected/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-kube-api-access-bs68x\") pod \"nova-api-0\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.811155 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.833396 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxd6\" (UniqueName: \"kubernetes.io/projected/3203fee2-fb4f-455d-8919-029705d3b1df-kube-api-access-wzxd6\") pod \"nova-scheduler-0\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.833481 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-config-data\") pod \"nova-scheduler-0\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.833570 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.845389 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.885241 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.886741 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.899075 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.949118 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzxd6\" (UniqueName: \"kubernetes.io/projected/3203fee2-fb4f-455d-8919-029705d3b1df-kube-api-access-wzxd6\") pod \"nova-scheduler-0\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.953201 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-config-data\") pod \"nova-scheduler-0\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.953380 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.956607 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.975359 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-config-data\") pod \"nova-scheduler-0\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.975980 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:28 crc kubenswrapper[4888]: I1006 15:20:28.995615 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzxd6\" (UniqueName: \"kubernetes.io/projected/3203fee2-fb4f-455d-8919-029705d3b1df-kube-api-access-wzxd6\") pod \"nova-scheduler-0\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.021033 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.022892 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.031369 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.042879 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.055522 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5lz\" (UniqueName: \"kubernetes.io/projected/b4ef5f28-b230-49fd-9858-afe474e4cebe-kube-api-access-zl5lz\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.055597 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.055674 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.065398 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-qn77m"] Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.067406 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.089340 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-qn77m"] Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.143495 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.160017 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5lz\" (UniqueName: \"kubernetes.io/projected/b4ef5f28-b230-49fd-9858-afe474e4cebe-kube-api-access-zl5lz\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.160089 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-config-data\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.160128 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.160189 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.160234 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf5wr\" (UniqueName: \"kubernetes.io/projected/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-kube-api-access-jf5wr\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.160306 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-logs\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.160344 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.166292 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.193989 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5lz\" (UniqueName: \"kubernetes.io/projected/b4ef5f28-b230-49fd-9858-afe474e4cebe-kube-api-access-zl5lz\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.218502 4888 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.251285 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.275398 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcwhx\" (UniqueName: \"kubernetes.io/projected/4b79ad14-c215-418e-a2d9-a052e1f585bf-kube-api-access-zcwhx\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.275509 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.275560 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf5wr\" (UniqueName: \"kubernetes.io/projected/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-kube-api-access-jf5wr\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.275597 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.275672 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-config\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.275716 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.275743 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-logs\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.275883 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: 
\"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.276483 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-config-data\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.276555 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.277086 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-logs\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.289610 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.290690 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-config-data\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.306157 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf5wr\" (UniqueName: \"kubernetes.io/projected/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-kube-api-access-jf5wr\") pod \"nova-metadata-0\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.394493 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcwhx\" (UniqueName: \"kubernetes.io/projected/4b79ad14-c215-418e-a2d9-a052e1f585bf-kube-api-access-zcwhx\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.394559 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.394593 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.394650 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-config\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.394675 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.394743 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.397210 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.408520 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.409360 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.409877 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.409376 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-config\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.410406 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.522243 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcwhx\" (UniqueName: \"kubernetes.io/projected/4b79ad14-c215-418e-a2d9-a052e1f585bf-kube-api-access-zcwhx\") pod \"dnsmasq-dns-845d6d6f59-qn77m\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") " pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:29 crc kubenswrapper[4888]: I1006 15:20:29.729291 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.013643 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s4mwc"] Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.268387 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.334666 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.344824 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 15:20:30 crc kubenswrapper[4888]: W1006 15:20:30.346934 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4ef5f28_b230_49fd_9858_afe474e4cebe.slice/crio-5b6647287b38802acd912000f2de2e33c0895a8d948cec353b67eb8627ce5519 WatchSource:0}: Error finding container 5b6647287b38802acd912000f2de2e33c0895a8d948cec353b67eb8627ce5519: Status 404 returned error can't find the container with id 5b6647287b38802acd912000f2de2e33c0895a8d948cec353b67eb8627ce5519 Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.487638 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.489921 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s4mwc" event={"ID":"8e63fa23-500a-4bfa-9231-e4a6e0d7615d","Type":"ContainerStarted","Data":"285ed94dab6f74db9e51accb3da0611b6ae076a173378bf2282347b7aa3787a6"} Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.489957 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-s4mwc" event={"ID":"8e63fa23-500a-4bfa-9231-e4a6e0d7615d","Type":"ContainerStarted","Data":"d117fd3907de95cc3eaaf49d7fb5958eb743a30bd4b2922ec1912edc647372e5"} Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.496660 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3203fee2-fb4f-455d-8919-029705d3b1df","Type":"ContainerStarted","Data":"ba9c20f771cbe5b8db8c6452355bb7595b4b53c076fe169b3b688748dbc507b7"} Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.506581 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aae63d96-7434-4e46-93b5-2dbd3d2afdb6","Type":"ContainerStarted","Data":"ad8d0794265e3be3f4fa265b7714214c1e6d3bd7e80cc36f4c47cb14198d076b"} Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.522986 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b4ef5f28-b230-49fd-9858-afe474e4cebe","Type":"ContainerStarted","Data":"5b6647287b38802acd912000f2de2e33c0895a8d948cec353b67eb8627ce5519"} Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.527625 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-s4mwc" podStartSLOduration=2.527601894 podStartE2EDuration="2.527601894s" podCreationTimestamp="2025-10-06 15:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:20:30.517926239 +0000 UTC m=+1170.330276977" watchObservedRunningTime="2025-10-06 15:20:30.527601894 +0000 UTC m=+1170.339952612" Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.650512 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-qn77m"] Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.816776 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6727f"] Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.818246 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.823980 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.827224 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.843013 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6727f"] Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.984284 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-config-data\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.984344 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-scripts\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.984477 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:30 crc kubenswrapper[4888]: I1006 15:20:30.984504 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdtkg\" (UniqueName: \"kubernetes.io/projected/49b4955a-ce90-41d6-a9be-1b46072c3ab1-kube-api-access-fdtkg\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.087127 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.087174 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdtkg\" (UniqueName: \"kubernetes.io/projected/49b4955a-ce90-41d6-a9be-1b46072c3ab1-kube-api-access-fdtkg\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.087285 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-config-data\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.087311 4888 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-scripts\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.094424 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.099269 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-scripts\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.116221 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdtkg\" (UniqueName: \"kubernetes.io/projected/49b4955a-ce90-41d6-a9be-1b46072c3ab1-kube-api-access-fdtkg\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.116493 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-config-data\") pod \"nova-cell1-conductor-db-sync-6727f\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.170346 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.575414 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3cde3fe7-37dc-4fd0-a663-d55c8965ed79","Type":"ContainerStarted","Data":"e1b6343516266f0cc2a56ffe7cef605ced15dabf5a5a9e4d6dc5222ff9f78551"} Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.626414 4888 generic.go:334] "Generic (PLEG): container finished" podID="4b79ad14-c215-418e-a2d9-a052e1f585bf" containerID="37ed19fe8f63d96fa6ccdd0ee71e1086e5c19abd84dcce0014dbb475c3dca2f1" exitCode=0 Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.627892 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" event={"ID":"4b79ad14-c215-418e-a2d9-a052e1f585bf","Type":"ContainerDied","Data":"37ed19fe8f63d96fa6ccdd0ee71e1086e5c19abd84dcce0014dbb475c3dca2f1"} Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.627927 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" event={"ID":"4b79ad14-c215-418e-a2d9-a052e1f585bf","Type":"ContainerStarted","Data":"5cd40f7a9a386b312af2b274eea9a09ae1831037ed4a8027a7665d50d8c57ded"} Oct 06 15:20:31 crc kubenswrapper[4888]: I1006 15:20:31.756997 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6727f"] Oct 06 15:20:31 crc kubenswrapper[4888]: W1006 15:20:31.789762 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49b4955a_ce90_41d6_a9be_1b46072c3ab1.slice/crio-7197fde4c2108c7789ac82222cd5eaa6e1403a1327308df945bf45c38837d73b WatchSource:0}: Error finding container 7197fde4c2108c7789ac82222cd5eaa6e1403a1327308df945bf45c38837d73b: Status 404 returned error can't find the container with id 7197fde4c2108c7789ac82222cd5eaa6e1403a1327308df945bf45c38837d73b Oct 06 15:20:32 crc kubenswrapper[4888]: I1006 15:20:32.653619 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" event={"ID":"4b79ad14-c215-418e-a2d9-a052e1f585bf","Type":"ContainerStarted","Data":"f24cfbf4a7a581e34d608bfd770e121f2afcc8f65a38adc9d6db74680f429c58"} Oct 06 15:20:32 crc kubenswrapper[4888]: I1006 15:20:32.655188 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:32 crc kubenswrapper[4888]: I1006 15:20:32.659072 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6727f" event={"ID":"49b4955a-ce90-41d6-a9be-1b46072c3ab1","Type":"ContainerStarted","Data":"133f018ca8b767a426199661cf8166e6ae225a00f9ac12524fbe84b53ad63d71"} Oct 06 15:20:32 crc kubenswrapper[4888]: I1006 15:20:32.659117 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6727f" event={"ID":"49b4955a-ce90-41d6-a9be-1b46072c3ab1","Type":"ContainerStarted","Data":"7197fde4c2108c7789ac82222cd5eaa6e1403a1327308df945bf45c38837d73b"} Oct 06 15:20:32 crc kubenswrapper[4888]: I1006 15:20:32.701780 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" podStartSLOduration=4.701764874 podStartE2EDuration="4.701764874s" podCreationTimestamp="2025-10-06 15:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 
15:20:32.698635685 +0000 UTC m=+1172.510986403" watchObservedRunningTime="2025-10-06 15:20:32.701764874 +0000 UTC m=+1172.514115592" Oct 06 15:20:32 crc kubenswrapper[4888]: I1006 15:20:32.723924 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-6727f" podStartSLOduration=2.723903152 podStartE2EDuration="2.723903152s" podCreationTimestamp="2025-10-06 15:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:20:32.720082452 +0000 UTC m=+1172.532433170" watchObservedRunningTime="2025-10-06 15:20:32.723903152 +0000 UTC m=+1172.536253870" Oct 06 15:20:33 crc kubenswrapper[4888]: I1006 15:20:33.288862 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:33 crc kubenswrapper[4888]: I1006 15:20:33.318773 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 15:20:34 crc kubenswrapper[4888]: I1006 15:20:34.552736 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 06 15:20:35 crc kubenswrapper[4888]: I1006 15:20:35.704208 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3cde3fe7-37dc-4fd0-a663-d55c8965ed79","Type":"ContainerStarted","Data":"96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910"} Oct 06 15:20:35 crc kubenswrapper[4888]: I1006 15:20:35.707634 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aae63d96-7434-4e46-93b5-2dbd3d2afdb6","Type":"ContainerStarted","Data":"e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163"} Oct 06 15:20:35 crc kubenswrapper[4888]: I1006 15:20:35.709120 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b4ef5f28-b230-49fd-9858-afe474e4cebe","Type":"ContainerStarted","Data":"5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2"} Oct 06 15:20:35 crc kubenswrapper[4888]: I1006 15:20:35.709279 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b4ef5f28-b230-49fd-9858-afe474e4cebe" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2" gracePeriod=30 Oct 06 15:20:35 crc kubenswrapper[4888]: I1006 15:20:35.714348 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3203fee2-fb4f-455d-8919-029705d3b1df","Type":"ContainerStarted","Data":"be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134"} Oct 06 15:20:35 crc kubenswrapper[4888]: I1006 15:20:35.747008 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.028060431 podStartE2EDuration="7.74699212s" podCreationTimestamp="2025-10-06 15:20:28 +0000 UTC" firstStartedPulling="2025-10-06 15:20:30.39084096 +0000 UTC m=+1170.203191678" lastFinishedPulling="2025-10-06 15:20:35.109772649 +0000 UTC m=+1174.922123367" observedRunningTime="2025-10-06 15:20:35.745955697 +0000 UTC m=+1175.558306415" watchObservedRunningTime="2025-10-06 15:20:35.74699212 +0000 UTC m=+1175.559342828" Oct 06 15:20:36 crc kubenswrapper[4888]: I1006 15:20:36.728887 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3cde3fe7-37dc-4fd0-a663-d55c8965ed79","Type":"ContainerStarted","Data":"a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8"} Oct 06 15:20:36 crc kubenswrapper[4888]: I1006 15:20:36.733582 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aae63d96-7434-4e46-93b5-2dbd3d2afdb6","Type":"ContainerStarted","Data":"8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10"} Oct 06 15:20:36 crc kubenswrapper[4888]: I1006 15:20:36.728965 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3cde3fe7-37dc-4fd0-a663-d55c8965ed79" containerName="nova-metadata-log" containerID="cri-o://96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910" gracePeriod=30 Oct 06 15:20:36 crc kubenswrapper[4888]: I1006 15:20:36.729010 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3cde3fe7-37dc-4fd0-a663-d55c8965ed79" containerName="nova-metadata-metadata" containerID="cri-o://a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8" gracePeriod=30 Oct 06 15:20:36 crc kubenswrapper[4888]: I1006 15:20:36.762830 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.142012197 podStartE2EDuration="8.76279007s" podCreationTimestamp="2025-10-06 15:20:28 +0000 UTC" firstStartedPulling="2025-10-06 15:20:30.48878884 +0000 UTC m=+1170.301139568" lastFinishedPulling="2025-10-06 15:20:35.109566723 +0000 UTC m=+1174.921917441" observedRunningTime="2025-10-06 15:20:36.75513875 +0000 UTC m=+1176.567489468" watchObservedRunningTime="2025-10-06 15:20:36.76279007 +0000 UTC m=+1176.575140788" Oct 06 15:20:36 crc kubenswrapper[4888]: I1006 15:20:36.776402 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.9250697629999998 podStartE2EDuration="8.776372728s" podCreationTimestamp="2025-10-06 15:20:28 +0000 UTC" firstStartedPulling="2025-10-06 15:20:30.257723781 +0000 UTC m=+1170.070074499" lastFinishedPulling="2025-10-06 15:20:35.109026746 +0000 UTC m=+1174.921377464" observedRunningTime="2025-10-06 15:20:35.783257923 +0000 UTC m=+1175.595608641" watchObservedRunningTime="2025-10-06 15:20:36.776372728 +0000 UTC m=+1176.588723446" Oct 06 15:20:36 crc kubenswrapper[4888]: I1006 15:20:36.780810 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.002080815 podStartE2EDuration="8.780767578s" podCreationTimestamp="2025-10-06 15:20:28 +0000 UTC" firstStartedPulling="2025-10-06 15:20:30.335529646 +0000 UTC m=+1170.147880364" lastFinishedPulling="2025-10-06 15:20:35.114216409 +0000 UTC m=+1174.926567127" observedRunningTime="2025-10-06 15:20:36.780166759 +0000 UTC m=+1176.592517497" watchObservedRunningTime="2025-10-06 15:20:36.780767578 +0000 UTC m=+1176.593118296" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.425090 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.549253 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-logs\") pod \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.549416 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-combined-ca-bundle\") pod \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.549451 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf5wr\" (UniqueName: \"kubernetes.io/projected/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-kube-api-access-jf5wr\") pod \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.549585 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-config-data\") pod \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\" (UID: \"3cde3fe7-37dc-4fd0-a663-d55c8965ed79\") " Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.549583 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-logs" (OuterVolumeSpecName: "logs") pod "3cde3fe7-37dc-4fd0-a663-d55c8965ed79" (UID: "3cde3fe7-37dc-4fd0-a663-d55c8965ed79"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.550088 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.558934 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-kube-api-access-jf5wr" (OuterVolumeSpecName: "kube-api-access-jf5wr") pod "3cde3fe7-37dc-4fd0-a663-d55c8965ed79" (UID: "3cde3fe7-37dc-4fd0-a663-d55c8965ed79"). InnerVolumeSpecName "kube-api-access-jf5wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.583533 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cde3fe7-37dc-4fd0-a663-d55c8965ed79" (UID: "3cde3fe7-37dc-4fd0-a663-d55c8965ed79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.584142 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-config-data" (OuterVolumeSpecName: "config-data") pod "3cde3fe7-37dc-4fd0-a663-d55c8965ed79" (UID: "3cde3fe7-37dc-4fd0-a663-d55c8965ed79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.651320 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.651357 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf5wr\" (UniqueName: \"kubernetes.io/projected/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-kube-api-access-jf5wr\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.651369 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cde3fe7-37dc-4fd0-a663-d55c8965ed79-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.743380 4888 generic.go:334] "Generic (PLEG): container finished" podID="3cde3fe7-37dc-4fd0-a663-d55c8965ed79" containerID="a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8" exitCode=0 Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.743406 4888 generic.go:334] "Generic (PLEG): container finished" podID="3cde3fe7-37dc-4fd0-a663-d55c8965ed79" containerID="96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910" exitCode=143 Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.743532 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.743526 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3cde3fe7-37dc-4fd0-a663-d55c8965ed79","Type":"ContainerDied","Data":"a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8"} Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.743714 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3cde3fe7-37dc-4fd0-a663-d55c8965ed79","Type":"ContainerDied","Data":"96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910"} Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.743730 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3cde3fe7-37dc-4fd0-a663-d55c8965ed79","Type":"ContainerDied","Data":"e1b6343516266f0cc2a56ffe7cef605ced15dabf5a5a9e4d6dc5222ff9f78551"} Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.743747 4888 scope.go:117] "RemoveContainer" containerID="a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.784852 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.795174 4888 scope.go:117] "RemoveContainer" containerID="96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.798694 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.819630 4888 scope.go:117] "RemoveContainer" containerID="a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.820939 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:37 crc kubenswrapper[4888]: E1006 15:20:37.821285 4888 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3cde3fe7-37dc-4fd0-a663-d55c8965ed79" containerName="nova-metadata-log" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.821299 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cde3fe7-37dc-4fd0-a663-d55c8965ed79" containerName="nova-metadata-log" Oct 06 15:20:37 crc kubenswrapper[4888]: E1006 15:20:37.821315 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cde3fe7-37dc-4fd0-a663-d55c8965ed79" containerName="nova-metadata-metadata" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.821322 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cde3fe7-37dc-4fd0-a663-d55c8965ed79" containerName="nova-metadata-metadata" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.821551 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cde3fe7-37dc-4fd0-a663-d55c8965ed79" containerName="nova-metadata-log" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.821579 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cde3fe7-37dc-4fd0-a663-d55c8965ed79" containerName="nova-metadata-metadata" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.822526 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 15:20:37 crc kubenswrapper[4888]: E1006 15:20:37.825109 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8\": container with ID starting with a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8 not found: ID does not exist" containerID="a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.825168 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8"} err="failed to get container status \"a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8\": rpc error: code = NotFound desc = could not find container \"a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8\": container with ID starting with a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8 not found: ID does not exist" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.825202 4888 scope.go:117] "RemoveContainer" containerID="96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.825477 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.826019 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 06 15:20:37 crc kubenswrapper[4888]: E1006 15:20:37.826655 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910\": container with ID starting with 96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910 not found: ID does not exist" containerID="96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.826708 4888 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910"} err="failed to get container status \"96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910\": rpc error: code = NotFound desc = could not find container \"96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910\": container with ID starting with 96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910 not found: ID does not exist" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.826739 4888 scope.go:117] "RemoveContainer" containerID="a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.827148 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8"} err="failed to get container status \"a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8\": rpc error: code = NotFound desc = could not find container \"a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8\": container with ID starting with a9b5b7360566c911e09d3a512e022360cb5881f394f22c48665ab939bfa912b8 not found: ID does not exist" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.827189 4888 scope.go:117] "RemoveContainer" containerID="96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.827501 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910"} err="failed to get container status \"96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910\": rpc error: code = NotFound desc = could not find container \"96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910\": container with ID starting with 96744ce22c792211ea11c05ee2a1b09ff1500f62b05d945abaf17ca960261910 not found: ID does not exist" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.847078 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.956958 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-config-data\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.957027 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-logs\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.957088 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.957270 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9vd5\" (UniqueName: 
\"kubernetes.io/projected/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-kube-api-access-h9vd5\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:37 crc kubenswrapper[4888]: I1006 15:20:37.957591 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.059744 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-config-data\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.059910 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-logs\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.059947 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.059976 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9vd5\" (UniqueName: \"kubernetes.io/projected/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-kube-api-access-h9vd5\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.060051 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.061325 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-logs\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.066713 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-config-data\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.067347 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.076162 4888 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.088728 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9vd5\" (UniqueName: \"kubernetes.io/projected/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-kube-api-access-h9vd5\") pod \"nova-metadata-0\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.149591 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.741274 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:38 crc kubenswrapper[4888]: W1006 15:20:38.769675 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e76cb5c_4c75_43f9_9f66_95fb6e77c065.slice/crio-1b37dfbf24fa7ba18ba726748badca4dc17a01c7cad972951cc7cc64bd1ece13 WatchSource:0}: Error finding container 1b37dfbf24fa7ba18ba726748badca4dc17a01c7cad972951cc7cc64bd1ece13: Status 404 returned error can't find the container with id 1b37dfbf24fa7ba18ba726748badca4dc17a01c7cad972951cc7cc64bd1ece13 Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.813387 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.813424 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 15:20:38 crc kubenswrapper[4888]: I1006 15:20:38.935021 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cde3fe7-37dc-4fd0-a663-d55c8965ed79" path="/var/lib/kubelet/pods/3cde3fe7-37dc-4fd0-a663-d55c8965ed79/volumes" Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.146906 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.148159 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.222443 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.253281 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.731966 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.779482 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6e76cb5c-4c75-43f9-9f66-95fb6e77c065","Type":"ContainerStarted","Data":"22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830"} Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.779628 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6e76cb5c-4c75-43f9-9f66-95fb6e77c065","Type":"ContainerStarted","Data":"51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17"} Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.779650 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6e76cb5c-4c75-43f9-9f66-95fb6e77c065","Type":"ContainerStarted","Data":"1b37dfbf24fa7ba18ba726748badca4dc17a01c7cad972951cc7cc64bd1ece13"} Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.881520 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-r4vsc"] Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.881853 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" podUID="e50b443c-75ad-4c36-871e-6d486f99d547" containerName="dnsmasq-dns" containerID="cri-o://942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862" gracePeriod=10 Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.882787 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.897703 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.898226 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 15:20:39 crc kubenswrapper[4888]: I1006 15:20:39.917380 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.917357315 podStartE2EDuration="2.917357315s" podCreationTimestamp="2025-10-06 15:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:20:39.844479616 +0000 UTC m=+1179.656830334" watchObservedRunningTime="2025-10-06 15:20:39.917357315 +0000 UTC m=+1179.729708033" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.531636 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.620138 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-swift-storage-0\") pod \"e50b443c-75ad-4c36-871e-6d486f99d547\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.620197 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-nb\") pod \"e50b443c-75ad-4c36-871e-6d486f99d547\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.620323 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-config\") pod \"e50b443c-75ad-4c36-871e-6d486f99d547\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.620380 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-sb\") pod \"e50b443c-75ad-4c36-871e-6d486f99d547\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.620549 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-svc\") pod \"e50b443c-75ad-4c36-871e-6d486f99d547\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.620612 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd2cm\" (UniqueName: \"kubernetes.io/projected/e50b443c-75ad-4c36-871e-6d486f99d547-kube-api-access-kd2cm\") pod \"e50b443c-75ad-4c36-871e-6d486f99d547\" (UID: \"e50b443c-75ad-4c36-871e-6d486f99d547\") " Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.627286 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e50b443c-75ad-4c36-871e-6d486f99d547-kube-api-access-kd2cm" (OuterVolumeSpecName: "kube-api-access-kd2cm") pod "e50b443c-75ad-4c36-871e-6d486f99d547" (UID: "e50b443c-75ad-4c36-871e-6d486f99d547"). InnerVolumeSpecName "kube-api-access-kd2cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.726635 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd2cm\" (UniqueName: \"kubernetes.io/projected/e50b443c-75ad-4c36-871e-6d486f99d547-kube-api-access-kd2cm\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.748952 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-config" (OuterVolumeSpecName: "config") pod "e50b443c-75ad-4c36-871e-6d486f99d547" (UID: "e50b443c-75ad-4c36-871e-6d486f99d547"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.749124 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e50b443c-75ad-4c36-871e-6d486f99d547" (UID: "e50b443c-75ad-4c36-871e-6d486f99d547"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.769659 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e50b443c-75ad-4c36-871e-6d486f99d547" (UID: "e50b443c-75ad-4c36-871e-6d486f99d547"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.776273 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e50b443c-75ad-4c36-871e-6d486f99d547" (UID: "e50b443c-75ad-4c36-871e-6d486f99d547"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.807180 4888 generic.go:334] "Generic (PLEG): container finished" podID="e50b443c-75ad-4c36-871e-6d486f99d547" containerID="942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862" exitCode=0 Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.807760 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e50b443c-75ad-4c36-871e-6d486f99d547" (UID: "e50b443c-75ad-4c36-871e-6d486f99d547"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.807901 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.808232 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" event={"ID":"e50b443c-75ad-4c36-871e-6d486f99d547","Type":"ContainerDied","Data":"942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862"} Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.808269 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-r4vsc" event={"ID":"e50b443c-75ad-4c36-871e-6d486f99d547","Type":"ContainerDied","Data":"63d9da943a196643cca0b491b0f6c825ec35ce0d3276fb5c7d7f4153dbdf0440"} Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.808289 4888 scope.go:117] "RemoveContainer" containerID="942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.828130 4888 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.828440 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.828543 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.828622 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.828703 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e50b443c-75ad-4c36-871e-6d486f99d547-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.856944 4888 scope.go:117] "RemoveContainer" containerID="3fc72f5910feda18f54b3fd907167ac5fde05cb87351eb930e960a1da4b63f66" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.863901 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-r4vsc"] Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.888310 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-r4vsc"] Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.907850 4888 scope.go:117] "RemoveContainer" containerID="942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862" Oct 06 15:20:40 crc kubenswrapper[4888]: E1006 15:20:40.910747 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862\": container with ID starting with 942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862 not found: ID does not exist" containerID="942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.910790 4888 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862"} err="failed to get container status \"942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862\": rpc error: code = NotFound desc = could not find container \"942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862\": container with ID starting with 942415414a170c028c0608529367dd3d19f14a0ecc9653f2bff66d6720abc862 not found: ID does not exist" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.910887 4888 scope.go:117] "RemoveContainer" containerID="3fc72f5910feda18f54b3fd907167ac5fde05cb87351eb930e960a1da4b63f66" Oct 06 15:20:40 crc kubenswrapper[4888]: E1006 15:20:40.913656 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc72f5910feda18f54b3fd907167ac5fde05cb87351eb930e960a1da4b63f66\": container with ID starting with 3fc72f5910feda18f54b3fd907167ac5fde05cb87351eb930e960a1da4b63f66 not found: ID does not exist" containerID="3fc72f5910feda18f54b3fd907167ac5fde05cb87351eb930e960a1da4b63f66" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.913699 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc72f5910feda18f54b3fd907167ac5fde05cb87351eb930e960a1da4b63f66"} err="failed to get container status \"3fc72f5910feda18f54b3fd907167ac5fde05cb87351eb930e960a1da4b63f66\": rpc error: code = NotFound desc = could not find container \"3fc72f5910feda18f54b3fd907167ac5fde05cb87351eb930e960a1da4b63f66\": container with ID starting with 3fc72f5910feda18f54b3fd907167ac5fde05cb87351eb930e960a1da4b63f66 not found: ID does not exist" Oct 06 15:20:40 crc kubenswrapper[4888]: I1006 15:20:40.938633 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e50b443c-75ad-4c36-871e-6d486f99d547" path="/var/lib/kubelet/pods/e50b443c-75ad-4c36-871e-6d486f99d547/volumes" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.031233 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.031449 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6a9289a6-f236-4e76-ac2b-4ef38163f845" containerName="kube-state-metrics" containerID="cri-o://518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f" gracePeriod=30 Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.592549 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.652029 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2dxh\" (UniqueName: \"kubernetes.io/projected/6a9289a6-f236-4e76-ac2b-4ef38163f845-kube-api-access-m2dxh\") pod \"6a9289a6-f236-4e76-ac2b-4ef38163f845\" (UID: \"6a9289a6-f236-4e76-ac2b-4ef38163f845\") " Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.664713 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a9289a6-f236-4e76-ac2b-4ef38163f845-kube-api-access-m2dxh" (OuterVolumeSpecName: "kube-api-access-m2dxh") pod "6a9289a6-f236-4e76-ac2b-4ef38163f845" (UID: "6a9289a6-f236-4e76-ac2b-4ef38163f845"). InnerVolumeSpecName "kube-api-access-m2dxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.754610 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2dxh\" (UniqueName: \"kubernetes.io/projected/6a9289a6-f236-4e76-ac2b-4ef38163f845-kube-api-access-m2dxh\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.821363 4888 generic.go:334] "Generic (PLEG): container finished" podID="6a9289a6-f236-4e76-ac2b-4ef38163f845" containerID="518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f" exitCode=2 Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.823017 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.823637 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6a9289a6-f236-4e76-ac2b-4ef38163f845","Type":"ContainerDied","Data":"518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f"} Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.823693 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6a9289a6-f236-4e76-ac2b-4ef38163f845","Type":"ContainerDied","Data":"a1d104ac1333ab12e8cfcd11d609f1c8a196c5ef8f75bb089719b727abd5ebee"} Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.823714 4888 scope.go:117] "RemoveContainer" containerID="518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.878538 4888 scope.go:117] "RemoveContainer" containerID="518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f" Oct 06 15:20:41 crc kubenswrapper[4888]: E1006 15:20:41.882664 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f\": container with ID starting with 518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f not found: ID does not exist" containerID="518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.882723 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f"} err="failed to get container status \"518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f\": rpc error: code = NotFound desc = could not find container \"518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f\": container with ID starting with 518111e07b9f7c605736d02c91164c93f9d6ca96bacc720eff0cd4003f423d8f not found: ID does not exist" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.904159 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.917323 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.972110 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 15:20:41 crc kubenswrapper[4888]: E1006 15:20:41.972900 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50b443c-75ad-4c36-871e-6d486f99d547" containerName="init" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.972915 4888 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="e50b443c-75ad-4c36-871e-6d486f99d547" containerName="init" Oct 06 15:20:41 crc kubenswrapper[4888]: E1006 15:20:41.972950 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a9289a6-f236-4e76-ac2b-4ef38163f845" containerName="kube-state-metrics" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.972962 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a9289a6-f236-4e76-ac2b-4ef38163f845" containerName="kube-state-metrics" Oct 06 15:20:41 crc kubenswrapper[4888]: E1006 15:20:41.972996 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50b443c-75ad-4c36-871e-6d486f99d547" containerName="dnsmasq-dns" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.973009 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50b443c-75ad-4c36-871e-6d486f99d547" containerName="dnsmasq-dns" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.973420 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a9289a6-f236-4e76-ac2b-4ef38163f845" containerName="kube-state-metrics" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.973468 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="e50b443c-75ad-4c36-871e-6d486f99d547" containerName="dnsmasq-dns" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.976414 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.982185 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.983885 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 06 15:20:41 crc kubenswrapper[4888]: I1006 15:20:41.997095 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.066191 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfkqk\" (UniqueName: \"kubernetes.io/projected/0cb31d67-e313-48ce-8230-513d88d01445-kube-api-access-zfkqk\") pod \"kube-state-metrics-0\" (UID: \"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.066268 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb31d67-e313-48ce-8230-513d88d01445-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.066394 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0cb31d67-e313-48ce-8230-513d88d01445-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.066458 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb31d67-e313-48ce-8230-513d88d01445-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: 
\"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.168397 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfkqk\" (UniqueName: \"kubernetes.io/projected/0cb31d67-e313-48ce-8230-513d88d01445-kube-api-access-zfkqk\") pod \"kube-state-metrics-0\" (UID: \"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.168458 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb31d67-e313-48ce-8230-513d88d01445-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.168574 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0cb31d67-e313-48ce-8230-513d88d01445-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.168624 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb31d67-e313-48ce-8230-513d88d01445-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.174265 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0cb31d67-e313-48ce-8230-513d88d01445-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.180931 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb31d67-e313-48ce-8230-513d88d01445-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.188829 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb31d67-e313-48ce-8230-513d88d01445-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.202054 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfkqk\" (UniqueName: \"kubernetes.io/projected/0cb31d67-e313-48ce-8230-513d88d01445-kube-api-access-zfkqk\") pod \"kube-state-metrics-0\" (UID: \"0cb31d67-e313-48ce-8230-513d88d01445\") " pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.316611 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.777603 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.779839 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.830611 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0cb31d67-e313-48ce-8230-513d88d01445","Type":"ContainerStarted","Data":"aa4d774114dcbf57334fb7e76df541d9996f95b777523912fdbd96caf2d690ec"} Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.833829 4888 generic.go:334] "Generic (PLEG): container finished" podID="8e63fa23-500a-4bfa-9231-e4a6e0d7615d" containerID="285ed94dab6f74db9e51accb3da0611b6ae076a173378bf2282347b7aa3787a6" exitCode=0 Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.833860 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s4mwc" event={"ID":"8e63fa23-500a-4bfa-9231-e4a6e0d7615d","Type":"ContainerDied","Data":"285ed94dab6f74db9e51accb3da0611b6ae076a173378bf2282347b7aa3787a6"} Oct 06 15:20:42 crc kubenswrapper[4888]: I1006 15:20:42.937712 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a9289a6-f236-4e76-ac2b-4ef38163f845" path="/var/lib/kubelet/pods/6a9289a6-f236-4e76-ac2b-4ef38163f845/volumes" Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.151473 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.151590 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.558761 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.559443 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="ceilometer-central-agent" containerID="cri-o://07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78" gracePeriod=30 Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.559487 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="proxy-httpd" containerID="cri-o://01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a" gracePeriod=30 Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.559592 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="sg-core" containerID="cri-o://ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4" gracePeriod=30 Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.559652 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="ceilometer-notification-agent" containerID="cri-o://b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c" gracePeriod=30 Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.845109 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"0cb31d67-e313-48ce-8230-513d88d01445","Type":"ContainerStarted","Data":"b14bc8196a78ca37305553b4b2e22483e4f1958db1dea27dea4121c323e3b99a"} Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.845218 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.848612 4888 generic.go:334] "Generic (PLEG): container finished" podID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerID="01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a" exitCode=0 Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.848645 4888 generic.go:334] "Generic (PLEG): container finished" podID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerID="ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4" exitCode=2 Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.848711 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4735cf9f-0b78-4094-8431-f15fe0bf7d34","Type":"ContainerDied","Data":"01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a"} Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.848758 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4735cf9f-0b78-4094-8431-f15fe0bf7d34","Type":"ContainerDied","Data":"ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4"} Oct 06 15:20:43 crc kubenswrapper[4888]: I1006 15:20:43.871749 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.451087598 podStartE2EDuration="2.871723877s" podCreationTimestamp="2025-10-06 15:20:41 +0000 UTC" firstStartedPulling="2025-10-06 15:20:42.779620559 +0000 UTC m=+1182.591971277" lastFinishedPulling="2025-10-06 15:20:43.200256838 +0000 UTC m=+1183.012607556" observedRunningTime="2025-10-06 15:20:43.867565116 +0000 UTC m=+1183.679915834" watchObservedRunningTime="2025-10-06 15:20:43.871723877 +0000 UTC m=+1183.684074585" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.263657 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.308961 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-combined-ca-bundle\") pod \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.309075 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk728\" (UniqueName: \"kubernetes.io/projected/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-kube-api-access-vk728\") pod \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.309225 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-scripts\") pod \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.309289 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-config-data\") pod \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\" (UID: \"8e63fa23-500a-4bfa-9231-e4a6e0d7615d\") " Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.316935 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-scripts" (OuterVolumeSpecName: "scripts") pod "8e63fa23-500a-4bfa-9231-e4a6e0d7615d" (UID: "8e63fa23-500a-4bfa-9231-e4a6e0d7615d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.321001 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-kube-api-access-vk728" (OuterVolumeSpecName: "kube-api-access-vk728") pod "8e63fa23-500a-4bfa-9231-e4a6e0d7615d" (UID: "8e63fa23-500a-4bfa-9231-e4a6e0d7615d"). InnerVolumeSpecName "kube-api-access-vk728". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.343162 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e63fa23-500a-4bfa-9231-e4a6e0d7615d" (UID: "8e63fa23-500a-4bfa-9231-e4a6e0d7615d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.353483 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-config-data" (OuterVolumeSpecName: "config-data") pod "8e63fa23-500a-4bfa-9231-e4a6e0d7615d" (UID: "8e63fa23-500a-4bfa-9231-e4a6e0d7615d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.411265 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.411308 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.411321 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.411335 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk728\" (UniqueName: \"kubernetes.io/projected/8e63fa23-500a-4bfa-9231-e4a6e0d7615d-kube-api-access-vk728\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.859200 4888 generic.go:334] "Generic (PLEG): container finished" podID="49b4955a-ce90-41d6-a9be-1b46072c3ab1" containerID="133f018ca8b767a426199661cf8166e6ae225a00f9ac12524fbe84b53ad63d71" exitCode=0 Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.859359 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6727f" event={"ID":"49b4955a-ce90-41d6-a9be-1b46072c3ab1","Type":"ContainerDied","Data":"133f018ca8b767a426199661cf8166e6ae225a00f9ac12524fbe84b53ad63d71"} Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.861513 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s4mwc" event={"ID":"8e63fa23-500a-4bfa-9231-e4a6e0d7615d","Type":"ContainerDied","Data":"d117fd3907de95cc3eaaf49d7fb5958eb743a30bd4b2922ec1912edc647372e5"} Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.861547 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d117fd3907de95cc3eaaf49d7fb5958eb743a30bd4b2922ec1912edc647372e5" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.861547 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s4mwc" Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.874259 4888 generic.go:334] "Generic (PLEG): container finished" podID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerID="07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78" exitCode=0 Oct 06 15:20:44 crc kubenswrapper[4888]: I1006 15:20:44.874331 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4735cf9f-0b78-4094-8431-f15fe0bf7d34","Type":"ContainerDied","Data":"07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78"} Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.025559 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.025882 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerName="nova-api-log" containerID="cri-o://e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163" gracePeriod=30 Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.026059 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerName="nova-api-api" containerID="cri-o://8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10" gracePeriod=30 Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.050272 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.050531 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3203fee2-fb4f-455d-8919-029705d3b1df" containerName="nova-scheduler-scheduler" containerID="cri-o://be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134" gracePeriod=30 Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.099938 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.100139 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6e76cb5c-4c75-43f9-9f66-95fb6e77c065" containerName="nova-metadata-log" containerID="cri-o://51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17" gracePeriod=30 Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.100440 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6e76cb5c-4c75-43f9-9f66-95fb6e77c065" containerName="nova-metadata-metadata" containerID="cri-o://22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830" gracePeriod=30 Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.602722 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.636260 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-nova-metadata-tls-certs\") pod \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.636342 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9vd5\" (UniqueName: \"kubernetes.io/projected/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-kube-api-access-h9vd5\") pod \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.636456 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-config-data\") pod \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.636483 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-combined-ca-bundle\") pod \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.636532 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-logs\") pod \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\" (UID: \"6e76cb5c-4c75-43f9-9f66-95fb6e77c065\") " Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.638480 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-logs" (OuterVolumeSpecName: "logs") pod "6e76cb5c-4c75-43f9-9f66-95fb6e77c065" (UID: "6e76cb5c-4c75-43f9-9f66-95fb6e77c065"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.673460 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-kube-api-access-h9vd5" (OuterVolumeSpecName: "kube-api-access-h9vd5") pod "6e76cb5c-4c75-43f9-9f66-95fb6e77c065" (UID: "6e76cb5c-4c75-43f9-9f66-95fb6e77c065"). InnerVolumeSpecName "kube-api-access-h9vd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.702944 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e76cb5c-4c75-43f9-9f66-95fb6e77c065" (UID: "6e76cb5c-4c75-43f9-9f66-95fb6e77c065"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.726941 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-config-data" (OuterVolumeSpecName: "config-data") pod "6e76cb5c-4c75-43f9-9f66-95fb6e77c065" (UID: "6e76cb5c-4c75-43f9-9f66-95fb6e77c065"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.744844 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9vd5\" (UniqueName: \"kubernetes.io/projected/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-kube-api-access-h9vd5\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.744879 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.744901 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.744915 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.760040 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6e76cb5c-4c75-43f9-9f66-95fb6e77c065" (UID: "6e76cb5c-4c75-43f9-9f66-95fb6e77c065"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.846778 4888 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e76cb5c-4c75-43f9-9f66-95fb6e77c065-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.885639 4888 generic.go:334] "Generic (PLEG): container finished" podID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerID="e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163" exitCode=143 Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.885704 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aae63d96-7434-4e46-93b5-2dbd3d2afdb6","Type":"ContainerDied","Data":"e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163"} Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.890079 4888 generic.go:334] "Generic (PLEG): container finished" podID="6e76cb5c-4c75-43f9-9f66-95fb6e77c065" containerID="22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830" exitCode=0 Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.890121 4888 generic.go:334] "Generic (PLEG): container finished" podID="6e76cb5c-4c75-43f9-9f66-95fb6e77c065" containerID="51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17" exitCode=143 Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.890175 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.890239 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6e76cb5c-4c75-43f9-9f66-95fb6e77c065","Type":"ContainerDied","Data":"22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830"} Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.890277 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6e76cb5c-4c75-43f9-9f66-95fb6e77c065","Type":"ContainerDied","Data":"51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17"} Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.890292 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6e76cb5c-4c75-43f9-9f66-95fb6e77c065","Type":"ContainerDied","Data":"1b37dfbf24fa7ba18ba726748badca4dc17a01c7cad972951cc7cc64bd1ece13"} Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.890312 4888 scope.go:117] "RemoveContainer" containerID="22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.927018 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.929073 4888 scope.go:117] "RemoveContainer" containerID="51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.937154 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.961865 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:45 crc kubenswrapper[4888]: E1006 15:20:45.962251 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e76cb5c-4c75-43f9-9f66-95fb6e77c065" containerName="nova-metadata-metadata" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.962265 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e76cb5c-4c75-43f9-9f66-95fb6e77c065" containerName="nova-metadata-metadata" Oct 06 15:20:45 crc kubenswrapper[4888]: E1006 15:20:45.962285 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e76cb5c-4c75-43f9-9f66-95fb6e77c065" containerName="nova-metadata-log" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.962292 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e76cb5c-4c75-43f9-9f66-95fb6e77c065" containerName="nova-metadata-log" Oct 06 15:20:45 crc kubenswrapper[4888]: E1006 15:20:45.962322 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e63fa23-500a-4bfa-9231-e4a6e0d7615d" containerName="nova-manage" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.962330 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e63fa23-500a-4bfa-9231-e4a6e0d7615d" containerName="nova-manage" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.962572 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e76cb5c-4c75-43f9-9f66-95fb6e77c065" containerName="nova-metadata-metadata" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.962593 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e63fa23-500a-4bfa-9231-e4a6e0d7615d" containerName="nova-manage" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.962601 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e76cb5c-4c75-43f9-9f66-95fb6e77c065" 
containerName="nova-metadata-log" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.963536 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.965734 4888 scope.go:117] "RemoveContainer" containerID="22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830" Oct 06 15:20:45 crc kubenswrapper[4888]: E1006 15:20:45.970026 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830\": container with ID starting with 22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830 not found: ID does not exist" containerID="22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.970301 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830"} err="failed to get container status \"22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830\": rpc error: code = NotFound desc = could not find container \"22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830\": container with ID starting with 22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830 not found: ID does not exist" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.970444 4888 scope.go:117] "RemoveContainer" containerID="51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17" Oct 06 15:20:45 crc kubenswrapper[4888]: E1006 15:20:45.970988 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17\": container with ID starting with 51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17 not found: ID does not exist" containerID="51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.971019 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17"} err="failed to get container status \"51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17\": rpc error: code = NotFound desc = could not find container \"51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17\": container with ID starting with 51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17 not found: ID does not exist" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.971039 4888 scope.go:117] "RemoveContainer" containerID="22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.971390 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830"} err="failed to get container status \"22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830\": rpc error: code = NotFound desc = could not find container \"22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830\": container with ID starting with 22f6a941cd0b3d5fe91476c330f0a02240b402aa46fd97d265f412525515c830 not found: ID does not exist" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.971500 4888 scope.go:117] "RemoveContainer" 
containerID="51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.971911 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17"} err="failed to get container status \"51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17\": rpc error: code = NotFound desc = could not find container \"51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17\": container with ID starting with 51dd616346a14af2fe05d5d159e1fd7cc93027a033a70b55d7ab547c5d218d17 not found: ID does not exist" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.972424 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 06 15:20:45 crc kubenswrapper[4888]: I1006 15:20:45.979720 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.005189 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.051149 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.051194 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sbxg\" (UniqueName: \"kubernetes.io/projected/8735866e-24ad-472a-9a6c-c326841f1d30-kube-api-access-7sbxg\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.051279 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8735866e-24ad-472a-9a6c-c326841f1d30-logs\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.051318 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.051360 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-config-data\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.155313 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.158011 4888 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7sbxg\" (UniqueName: \"kubernetes.io/projected/8735866e-24ad-472a-9a6c-c326841f1d30-kube-api-access-7sbxg\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.158166 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8735866e-24ad-472a-9a6c-c326841f1d30-logs\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.158227 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.158283 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-config-data\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.159615 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8735866e-24ad-472a-9a6c-c326841f1d30-logs\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.170399 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.177319 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.177714 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-config-data\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.183023 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sbxg\" (UniqueName: \"kubernetes.io/projected/8735866e-24ad-472a-9a6c-c326841f1d30-kube-api-access-7sbxg\") pod \"nova-metadata-0\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") " pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.287258 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.473380 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.570602 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-scripts\") pod \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.570728 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-config-data\") pod \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.570759 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdtkg\" (UniqueName: \"kubernetes.io/projected/49b4955a-ce90-41d6-a9be-1b46072c3ab1-kube-api-access-fdtkg\") pod \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.570776 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-combined-ca-bundle\") pod \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\" (UID: \"49b4955a-ce90-41d6-a9be-1b46072c3ab1\") " Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.576264 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-scripts" (OuterVolumeSpecName: "scripts") pod "49b4955a-ce90-41d6-a9be-1b46072c3ab1" (UID: "49b4955a-ce90-41d6-a9be-1b46072c3ab1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.577653 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b4955a-ce90-41d6-a9be-1b46072c3ab1-kube-api-access-fdtkg" (OuterVolumeSpecName: "kube-api-access-fdtkg") pod "49b4955a-ce90-41d6-a9be-1b46072c3ab1" (UID: "49b4955a-ce90-41d6-a9be-1b46072c3ab1"). InnerVolumeSpecName "kube-api-access-fdtkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.613741 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49b4955a-ce90-41d6-a9be-1b46072c3ab1" (UID: "49b4955a-ce90-41d6-a9be-1b46072c3ab1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.617630 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-config-data" (OuterVolumeSpecName: "config-data") pod "49b4955a-ce90-41d6-a9be-1b46072c3ab1" (UID: "49b4955a-ce90-41d6-a9be-1b46072c3ab1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.672875 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.672914 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.672926 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdtkg\" (UniqueName: \"kubernetes.io/projected/49b4955a-ce90-41d6-a9be-1b46072c3ab1-kube-api-access-fdtkg\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.672939 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49b4955a-ce90-41d6-a9be-1b46072c3ab1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.805535 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.904324 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8735866e-24ad-472a-9a6c-c326841f1d30","Type":"ContainerStarted","Data":"a7f4cf77fee0c7b1090307bc2ebecb4c6f3036fec3720e737cafa31c676f5cc1"} Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.910096 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6727f" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.910129 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6727f" event={"ID":"49b4955a-ce90-41d6-a9be-1b46072c3ab1","Type":"ContainerDied","Data":"7197fde4c2108c7789ac82222cd5eaa6e1403a1327308df945bf45c38837d73b"} Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.910184 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7197fde4c2108c7789ac82222cd5eaa6e1403a1327308df945bf45c38837d73b" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.946176 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e76cb5c-4c75-43f9-9f66-95fb6e77c065" path="/var/lib/kubelet/pods/6e76cb5c-4c75-43f9-9f66-95fb6e77c065/volumes" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.999201 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 06 15:20:46 crc kubenswrapper[4888]: E1006 15:20:46.999690 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b4955a-ce90-41d6-a9be-1b46072c3ab1" containerName="nova-cell1-conductor-db-sync" Oct 06 15:20:46 crc kubenswrapper[4888]: I1006 15:20:46.999715 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b4955a-ce90-41d6-a9be-1b46072c3ab1" containerName="nova-cell1-conductor-db-sync" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.000622 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b4955a-ce90-41d6-a9be-1b46072c3ab1" containerName="nova-cell1-conductor-db-sync" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.001473 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.008519 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.014517 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.087578 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873575d3-d1c1-4c76-a0f1-d9cb94724fe2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"873575d3-d1c1-4c76-a0f1-d9cb94724fe2\") " pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.087843 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873575d3-d1c1-4c76-a0f1-d9cb94724fe2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"873575d3-d1c1-4c76-a0f1-d9cb94724fe2\") " pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.087948 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkss6\" (UniqueName: \"kubernetes.io/projected/873575d3-d1c1-4c76-a0f1-d9cb94724fe2-kube-api-access-vkss6\") pod \"nova-cell1-conductor-0\" (UID: \"873575d3-d1c1-4c76-a0f1-d9cb94724fe2\") " pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.189183 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkss6\" (UniqueName: \"kubernetes.io/projected/873575d3-d1c1-4c76-a0f1-d9cb94724fe2-kube-api-access-vkss6\") pod \"nova-cell1-conductor-0\" (UID: \"873575d3-d1c1-4c76-a0f1-d9cb94724fe2\") " pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.189385 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873575d3-d1c1-4c76-a0f1-d9cb94724fe2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"873575d3-d1c1-4c76-a0f1-d9cb94724fe2\") " pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.189409 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873575d3-d1c1-4c76-a0f1-d9cb94724fe2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"873575d3-d1c1-4c76-a0f1-d9cb94724fe2\") " pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.193088 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/873575d3-d1c1-4c76-a0f1-d9cb94724fe2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"873575d3-d1c1-4c76-a0f1-d9cb94724fe2\") " pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.201518 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873575d3-d1c1-4c76-a0f1-d9cb94724fe2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"873575d3-d1c1-4c76-a0f1-d9cb94724fe2\") " pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.206364 4888 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkss6\" (UniqueName: \"kubernetes.io/projected/873575d3-d1c1-4c76-a0f1-d9cb94724fe2-kube-api-access-vkss6\") pod \"nova-cell1-conductor-0\" (UID: \"873575d3-d1c1-4c76-a0f1-d9cb94724fe2\") " pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.321304 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.776887 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.922855 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8735866e-24ad-472a-9a6c-c326841f1d30","Type":"ContainerStarted","Data":"3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721"} Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.922904 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8735866e-24ad-472a-9a6c-c326841f1d30","Type":"ContainerStarted","Data":"fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6"} Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.925750 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"873575d3-d1c1-4c76-a0f1-d9cb94724fe2","Type":"ContainerStarted","Data":"8a01e84f2b918d7adbb34fbc2c807c2d3b853f941e3d3a241ca5bf549b0920e2"} Oct 06 15:20:47 crc kubenswrapper[4888]: I1006 15:20:47.950443 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.950426571 podStartE2EDuration="2.950426571s" podCreationTimestamp="2025-10-06 15:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:20:47.94817162 +0000 UTC m=+1187.760522328" watchObservedRunningTime="2025-10-06 15:20:47.950426571 +0000 UTC m=+1187.762777289" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.531639 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.621987 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-config-data\") pod \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.622105 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs68x\" (UniqueName: \"kubernetes.io/projected/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-kube-api-access-bs68x\") pod \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.622133 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-combined-ca-bundle\") pod \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.622506 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-logs\") pod \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\" (UID: \"aae63d96-7434-4e46-93b5-2dbd3d2afdb6\") " Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.623699 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-logs" (OuterVolumeSpecName: "logs") pod "aae63d96-7434-4e46-93b5-2dbd3d2afdb6" (UID: "aae63d96-7434-4e46-93b5-2dbd3d2afdb6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.630010 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-kube-api-access-bs68x" (OuterVolumeSpecName: "kube-api-access-bs68x") pod "aae63d96-7434-4e46-93b5-2dbd3d2afdb6" (UID: "aae63d96-7434-4e46-93b5-2dbd3d2afdb6"). InnerVolumeSpecName "kube-api-access-bs68x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.649175 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-config-data" (OuterVolumeSpecName: "config-data") pod "aae63d96-7434-4e46-93b5-2dbd3d2afdb6" (UID: "aae63d96-7434-4e46-93b5-2dbd3d2afdb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.651090 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aae63d96-7434-4e46-93b5-2dbd3d2afdb6" (UID: "aae63d96-7434-4e46-93b5-2dbd3d2afdb6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.724592 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.724660 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs68x\" (UniqueName: \"kubernetes.io/projected/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-kube-api-access-bs68x\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.724674 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.724683 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae63d96-7434-4e46-93b5-2dbd3d2afdb6-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.942377 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"873575d3-d1c1-4c76-a0f1-d9cb94724fe2","Type":"ContainerStarted","Data":"b176ca50fdfad652411f16978c36c4ec0109d3e6e4d18e1497d1fcc94e294ae9"} Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.944507 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.946974 4888 generic.go:334] "Generic (PLEG): container finished" podID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerID="8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10" exitCode=0 Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.947337 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.947504 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aae63d96-7434-4e46-93b5-2dbd3d2afdb6","Type":"ContainerDied","Data":"8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10"} Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.947546 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"aae63d96-7434-4e46-93b5-2dbd3d2afdb6","Type":"ContainerDied","Data":"ad8d0794265e3be3f4fa265b7714214c1e6d3bd7e80cc36f4c47cb14198d076b"} Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.947563 4888 scope.go:117] "RemoveContainer" containerID="8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.971330 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.971305633 podStartE2EDuration="2.971305633s" podCreationTimestamp="2025-10-06 15:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:20:48.964390985 +0000 UTC m=+1188.776741703" watchObservedRunningTime="2025-10-06 15:20:48.971305633 +0000 UTC m=+1188.783656351" Oct 06 15:20:48 crc kubenswrapper[4888]: I1006 15:20:48.977515 4888 scope.go:117] "RemoveContainer" containerID="e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.002110 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.011764 4888 scope.go:117] "RemoveContainer" containerID="8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10" Oct 06 15:20:49 crc kubenswrapper[4888]: E1006 15:20:49.014913 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10\": container with ID starting with 8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10 not found: ID does not exist" containerID="8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.019003 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10"} err="failed to get container status \"8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10\": rpc error: code = NotFound desc = could not find container \"8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10\": container with ID starting with 8e1fbcb9cc4dc7ebf3e0d6f17c9ec3b90d126609a113c351bbee496023ed8f10 not found: ID does not exist" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.019110 4888 scope.go:117] "RemoveContainer" containerID="e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163" Oct 06 15:20:49 crc kubenswrapper[4888]: E1006 15:20:49.022770 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163\": container with ID starting with e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163 not found: ID does not exist" 
containerID="e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.022890 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163"} err="failed to get container status \"e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163\": rpc error: code = NotFound desc = could not find container \"e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163\": container with ID starting with e215162a429f14e51906f4cfa57b2999627860243fb99a96e636c08937c50163 not found: ID does not exist" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.023372 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.036763 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 15:20:49 crc kubenswrapper[4888]: E1006 15:20:49.037593 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerName="nova-api-log" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.037704 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerName="nova-api-log" Oct 06 15:20:49 crc kubenswrapper[4888]: E1006 15:20:49.037809 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerName="nova-api-api" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.037873 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerName="nova-api-api" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.038122 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerName="nova-api-log" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.038223 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" containerName="nova-api-api" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.039273 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.041692 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.062877 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.130338 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xlxs\" (UniqueName: \"kubernetes.io/projected/3541442a-1534-4d51-8696-a0ca98b1b950-kube-api-access-5xlxs\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.130411 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-config-data\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.130446 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3541442a-1534-4d51-8696-a0ca98b1b950-logs\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.130538 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: E1006 15:20:49.148961 4888 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 15:20:49 crc kubenswrapper[4888]: E1006 15:20:49.150232 4888 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 15:20:49 crc kubenswrapper[4888]: E1006 15:20:49.151942 4888 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 15:20:49 crc kubenswrapper[4888]: E1006 15:20:49.151980 4888 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3203fee2-fb4f-455d-8919-029705d3b1df" containerName="nova-scheduler-scheduler" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.231991 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5xlxs\" (UniqueName: \"kubernetes.io/projected/3541442a-1534-4d51-8696-a0ca98b1b950-kube-api-access-5xlxs\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.232078 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-config-data\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.232115 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3541442a-1534-4d51-8696-a0ca98b1b950-logs\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.232638 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3541442a-1534-4d51-8696-a0ca98b1b950-logs\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.233003 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.237277 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-config-data\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.248311 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.249603 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xlxs\" (UniqueName: \"kubernetes.io/projected/3541442a-1534-4d51-8696-a0ca98b1b950-kube-api-access-5xlxs\") pod \"nova-api-0\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.360404 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.777991 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.845218 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-config-data\") pod \"3203fee2-fb4f-455d-8919-029705d3b1df\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.845389 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzxd6\" (UniqueName: \"kubernetes.io/projected/3203fee2-fb4f-455d-8919-029705d3b1df-kube-api-access-wzxd6\") pod \"3203fee2-fb4f-455d-8919-029705d3b1df\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.845526 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-combined-ca-bundle\") pod \"3203fee2-fb4f-455d-8919-029705d3b1df\" (UID: \"3203fee2-fb4f-455d-8919-029705d3b1df\") " Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.872164 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3203fee2-fb4f-455d-8919-029705d3b1df-kube-api-access-wzxd6" (OuterVolumeSpecName: "kube-api-access-wzxd6") pod "3203fee2-fb4f-455d-8919-029705d3b1df" (UID: "3203fee2-fb4f-455d-8919-029705d3b1df"). InnerVolumeSpecName "kube-api-access-wzxd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.903622 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-config-data" (OuterVolumeSpecName: "config-data") pod "3203fee2-fb4f-455d-8919-029705d3b1df" (UID: "3203fee2-fb4f-455d-8919-029705d3b1df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.907949 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3203fee2-fb4f-455d-8919-029705d3b1df" (UID: "3203fee2-fb4f-455d-8919-029705d3b1df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.947954 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzxd6\" (UniqueName: \"kubernetes.io/projected/3203fee2-fb4f-455d-8919-029705d3b1df-kube-api-access-wzxd6\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.947996 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.948009 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3203fee2-fb4f-455d-8919-029705d3b1df-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.964514 4888 generic.go:334] "Generic (PLEG): container finished" podID="3203fee2-fb4f-455d-8919-029705d3b1df" containerID="be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134" exitCode=0 Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.964593 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3203fee2-fb4f-455d-8919-029705d3b1df","Type":"ContainerDied","Data":"be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134"} Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.964625 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3203fee2-fb4f-455d-8919-029705d3b1df","Type":"ContainerDied","Data":"ba9c20f771cbe5b8db8c6452355bb7595b4b53c076fe169b3b688748dbc507b7"} Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.964644 4888 scope.go:117] "RemoveContainer" containerID="be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134" Oct 06 15:20:49 crc kubenswrapper[4888]: I1006 15:20:49.964751 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.011438 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.032741 4888 scope.go:117] "RemoveContainer" containerID="be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134" Oct 06 15:20:50 crc kubenswrapper[4888]: E1006 15:20:50.034003 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134\": container with ID starting with be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134 not found: ID does not exist" containerID="be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.034136 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134"} err="failed to get container status \"be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134\": rpc error: code = NotFound desc = could not find container \"be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134\": container with ID starting with be7e61f2867d536bd4b2087a2f85a2fc84eabef13c70a5ef4dea08f7692aa134 not found: ID does not exist" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.060067 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.073307 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.084891 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 15:20:50 crc kubenswrapper[4888]: E1006 15:20:50.085588 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3203fee2-fb4f-455d-8919-029705d3b1df" containerName="nova-scheduler-scheduler" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.085677 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="3203fee2-fb4f-455d-8919-029705d3b1df" containerName="nova-scheduler-scheduler" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.086025 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="3203fee2-fb4f-455d-8919-029705d3b1df" containerName="nova-scheduler-scheduler" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.086943 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.091134 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.093706 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.257576 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.257666 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-config-data\") pod \"nova-scheduler-0\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.257744 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x994r\" (UniqueName: \"kubernetes.io/projected/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-kube-api-access-x994r\") pod \"nova-scheduler-0\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.358990 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x994r\" (UniqueName: \"kubernetes.io/projected/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-kube-api-access-x994r\") pod \"nova-scheduler-0\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.359126 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.359230 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-config-data\") pod \"nova-scheduler-0\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.374728 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-config-data\") pod \"nova-scheduler-0\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.379472 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.380063 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x994r\" (UniqueName: 
\"kubernetes.io/projected/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-kube-api-access-x994r\") pod \"nova-scheduler-0\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") " pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.415120 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.469182 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.561760 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-run-httpd\") pod \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.562164 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-sg-core-conf-yaml\") pod \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.562279 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-log-httpd\") pod \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.562434 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-combined-ca-bundle\") pod \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.562539 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-config-data\") pod \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.562642 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-scripts\") pod \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.562689 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpn7d\" (UniqueName: \"kubernetes.io/projected/4735cf9f-0b78-4094-8431-f15fe0bf7d34-kube-api-access-dpn7d\") pod \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\" (UID: \"4735cf9f-0b78-4094-8431-f15fe0bf7d34\") " Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.564395 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4735cf9f-0b78-4094-8431-f15fe0bf7d34" (UID: "4735cf9f-0b78-4094-8431-f15fe0bf7d34"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.564408 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4735cf9f-0b78-4094-8431-f15fe0bf7d34" (UID: "4735cf9f-0b78-4094-8431-f15fe0bf7d34"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.568834 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-scripts" (OuterVolumeSpecName: "scripts") pod "4735cf9f-0b78-4094-8431-f15fe0bf7d34" (UID: "4735cf9f-0b78-4094-8431-f15fe0bf7d34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.573381 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4735cf9f-0b78-4094-8431-f15fe0bf7d34-kube-api-access-dpn7d" (OuterVolumeSpecName: "kube-api-access-dpn7d") pod "4735cf9f-0b78-4094-8431-f15fe0bf7d34" (UID: "4735cf9f-0b78-4094-8431-f15fe0bf7d34"). InnerVolumeSpecName "kube-api-access-dpn7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.636976 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4735cf9f-0b78-4094-8431-f15fe0bf7d34" (UID: "4735cf9f-0b78-4094-8431-f15fe0bf7d34"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.668699 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.672118 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpn7d\" (UniqueName: \"kubernetes.io/projected/4735cf9f-0b78-4094-8431-f15fe0bf7d34-kube-api-access-dpn7d\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.672155 4888 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.672171 4888 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.672182 4888 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4735cf9f-0b78-4094-8431-f15fe0bf7d34-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.744109 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4735cf9f-0b78-4094-8431-f15fe0bf7d34" (UID: "4735cf9f-0b78-4094-8431-f15fe0bf7d34"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.756287 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-config-data" (OuterVolumeSpecName: "config-data") pod "4735cf9f-0b78-4094-8431-f15fe0bf7d34" (UID: "4735cf9f-0b78-4094-8431-f15fe0bf7d34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.778331 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.778385 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4735cf9f-0b78-4094-8431-f15fe0bf7d34-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.936644 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3203fee2-fb4f-455d-8919-029705d3b1df" path="/var/lib/kubelet/pods/3203fee2-fb4f-455d-8919-029705d3b1df/volumes" Oct 06 15:20:50 crc kubenswrapper[4888]: I1006 15:20:50.938562 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae63d96-7434-4e46-93b5-2dbd3d2afdb6" path="/var/lib/kubelet/pods/aae63d96-7434-4e46-93b5-2dbd3d2afdb6/volumes" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:50.999644 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3541442a-1534-4d51-8696-a0ca98b1b950","Type":"ContainerStarted","Data":"85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d"} Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:50.999692 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3541442a-1534-4d51-8696-a0ca98b1b950","Type":"ContainerStarted","Data":"6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265"} Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:50.999702 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3541442a-1534-4d51-8696-a0ca98b1b950","Type":"ContainerStarted","Data":"03c61bc4157071e194e2801df46539e2f5910ce81f385bed63ebc3b9b7f1f489"} Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.009422 4888 generic.go:334] "Generic (PLEG): container finished" podID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerID="b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c" exitCode=0 Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.011290 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.011654 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4735cf9f-0b78-4094-8431-f15fe0bf7d34","Type":"ContainerDied","Data":"b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c"} Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.011690 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4735cf9f-0b78-4094-8431-f15fe0bf7d34","Type":"ContainerDied","Data":"51ef018b1fa1fd76f4135b02edd5850583e21120d179499dc77b6e6246af16ad"} Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.011714 4888 scope.go:117] "RemoveContainer" containerID="01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.026243 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.026220692 podStartE2EDuration="3.026220692s" podCreationTimestamp="2025-10-06 15:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:20:51.024015092 +0000 UTC m=+1190.836365820" watchObservedRunningTime="2025-10-06 15:20:51.026220692 +0000 UTC m=+1190.838571410" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.064881 4888 scope.go:117] "RemoveContainer" containerID="ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.083048 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.102707 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:20:51 crc kubenswrapper[4888]: W1006 15:20:51.109005 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26576ea5_0f85_4ca1_bf66_11c6f5d8edb6.slice/crio-775d8fa5701ff892bf4bb9d5291cad4969dbbe639c0610a7a806485235f016ce WatchSource:0}: Error finding container 775d8fa5701ff892bf4bb9d5291cad4969dbbe639c0610a7a806485235f016ce: Status 404 returned error can't find the container with id 775d8fa5701ff892bf4bb9d5291cad4969dbbe639c0610a7a806485235f016ce Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.109044 4888 scope.go:117] "RemoveContainer" containerID="b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.126766 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:20:51 crc kubenswrapper[4888]: E1006 15:20:51.127572 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="proxy-httpd" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.127599 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="proxy-httpd" Oct 06 15:20:51 crc kubenswrapper[4888]: E1006 15:20:51.127639 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="sg-core" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.127650 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="sg-core" Oct 06 15:20:51 crc kubenswrapper[4888]: E1006 15:20:51.127665 4888 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="ceilometer-notification-agent" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.127672 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="ceilometer-notification-agent" Oct 06 15:20:51 crc kubenswrapper[4888]: E1006 15:20:51.127686 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="ceilometer-central-agent" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.127693 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="ceilometer-central-agent" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.128052 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="ceilometer-notification-agent" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.128071 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="sg-core" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.128084 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="proxy-httpd" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.128102 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" containerName="ceilometer-central-agent" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.131771 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.133989 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.135652 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.135960 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.143349 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.155126 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.250605 4888 scope.go:117] "RemoveContainer" containerID="07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78" Oct 06 15:20:51 crc kubenswrapper[4888]: E1006 15:20:51.275888 4888 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4735cf9f_0b78_4094_8431_f15fe0bf7d34.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4735cf9f_0b78_4094_8431_f15fe0bf7d34.slice/crio-51ef018b1fa1fd76f4135b02edd5850583e21120d179499dc77b6e6246af16ad\": RecentStats: unable to find data in memory cache]" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.286381 4888 scope.go:117] "RemoveContainer" containerID="01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a" Oct 06 15:20:51 crc kubenswrapper[4888]: E1006 
15:20:51.287577 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a\": container with ID starting with 01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a not found: ID does not exist" containerID="01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.287620 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a"} err="failed to get container status \"01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a\": rpc error: code = NotFound desc = could not find container \"01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a\": container with ID starting with 01f6815c4e72638466298a340124b8d9cff238495e34bc689ec174a32545ad1a not found: ID does not exist" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.287647 4888 scope.go:117] "RemoveContainer" containerID="ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4" Oct 06 15:20:51 crc kubenswrapper[4888]: E1006 15:20:51.287889 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4\": container with ID starting with ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4 not found: ID does not exist" containerID="ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.287970 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4"} err="failed to get container status \"ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4\": rpc error: code = NotFound desc = could not find container \"ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4\": container with ID starting with ee1b68cdb2f5d657687d8af919a9bd446b16bc2bc1852a80af8218a5d6d3c6e4 not found: ID does not exist" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.287990 4888 scope.go:117] "RemoveContainer" containerID="b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.288224 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.289420 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 15:20:51 crc kubenswrapper[4888]: E1006 15:20:51.291663 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c\": container with ID starting with b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c not found: ID does not exist" containerID="b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.291705 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c"} err="failed to get container status 
\"b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c\": rpc error: code = NotFound desc = could not find container \"b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c\": container with ID starting with b2a5ab004c61cbae6aedcec99b1812f8c380941b73ad51280ca46989f88e144c not found: ID does not exist" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.291734 4888 scope.go:117] "RemoveContainer" containerID="07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78" Oct 06 15:20:51 crc kubenswrapper[4888]: E1006 15:20:51.292412 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78\": container with ID starting with 07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78 not found: ID does not exist" containerID="07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.292434 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78"} err="failed to get container status \"07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78\": rpc error: code = NotFound desc = could not find container \"07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78\": container with ID starting with 07e88a6629ae3a602dd36b8f37890b72894348845ed693a9623654704ce72f78 not found: ID does not exist" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.298407 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.299750 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-config-data\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.299942 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.300059 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-log-httpd\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.300737 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.300858 4888 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-run-httpd\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.300952 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-scripts\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.301098 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpxrk\" (UniqueName: \"kubernetes.io/projected/9577fcf2-ea6b-47ef-8d09-c35602fe127a-kube-api-access-lpxrk\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.403062 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-log-httpd\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.403309 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.403398 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-run-httpd\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.403428 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-scripts\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.403466 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpxrk\" (UniqueName: \"kubernetes.io/projected/9577fcf2-ea6b-47ef-8d09-c35602fe127a-kube-api-access-lpxrk\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.403592 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-log-httpd\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.403608 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.403716 
4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-config-data\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.403754 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.404114 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-run-httpd\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.408977 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-scripts\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.410103 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.410709 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-config-data\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.411731 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.413829 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.422937 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpxrk\" (UniqueName: \"kubernetes.io/projected/9577fcf2-ea6b-47ef-8d09-c35602fe127a-kube-api-access-lpxrk\") pod \"ceilometer-0\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " pod="openstack/ceilometer-0" Oct 06 15:20:51 crc kubenswrapper[4888]: I1006 15:20:51.577524 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:20:52 crc kubenswrapper[4888]: I1006 15:20:52.023060 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6","Type":"ContainerStarted","Data":"ca337e8a3fd253caee21977972b39d38dec83799f9a0e3bb680510e11da9efca"} Oct 06 15:20:52 crc kubenswrapper[4888]: I1006 15:20:52.023349 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6","Type":"ContainerStarted","Data":"775d8fa5701ff892bf4bb9d5291cad4969dbbe639c0610a7a806485235f016ce"} Oct 06 15:20:52 crc kubenswrapper[4888]: I1006 15:20:52.044858 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.044837191 podStartE2EDuration="2.044837191s" podCreationTimestamp="2025-10-06 15:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:20:52.040406831 +0000 UTC m=+1191.852757559" watchObservedRunningTime="2025-10-06 15:20:52.044837191 +0000 UTC m=+1191.857187909" Oct 06 15:20:52 crc kubenswrapper[4888]: I1006 15:20:52.119631 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:20:52 crc kubenswrapper[4888]: W1006 15:20:52.126334 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9577fcf2_ea6b_47ef_8d09_c35602fe127a.slice/crio-99ce52ed845a2c14577a351e3d020c2237e0b4b1be57e6e15d563c1118dc1e84 WatchSource:0}: Error finding container 99ce52ed845a2c14577a351e3d020c2237e0b4b1be57e6e15d563c1118dc1e84: Status 404 returned error can't find the container with id 99ce52ed845a2c14577a351e3d020c2237e0b4b1be57e6e15d563c1118dc1e84 Oct 06 15:20:52 crc kubenswrapper[4888]: I1006 15:20:52.330351 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 06 15:20:52 crc kubenswrapper[4888]: I1006 15:20:52.931840 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4735cf9f-0b78-4094-8431-f15fe0bf7d34" path="/var/lib/kubelet/pods/4735cf9f-0b78-4094-8431-f15fe0bf7d34/volumes" Oct 06 15:20:53 crc kubenswrapper[4888]: I1006 15:20:53.043067 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9577fcf2-ea6b-47ef-8d09-c35602fe127a","Type":"ContainerStarted","Data":"99ce52ed845a2c14577a351e3d020c2237e0b4b1be57e6e15d563c1118dc1e84"} Oct 06 15:20:54 crc kubenswrapper[4888]: I1006 15:20:54.055633 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9577fcf2-ea6b-47ef-8d09-c35602fe127a","Type":"ContainerStarted","Data":"423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9"} Oct 06 15:20:54 crc kubenswrapper[4888]: I1006 15:20:54.056112 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9577fcf2-ea6b-47ef-8d09-c35602fe127a","Type":"ContainerStarted","Data":"03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3"} Oct 06 15:20:55 crc kubenswrapper[4888]: I1006 15:20:55.066749 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9577fcf2-ea6b-47ef-8d09-c35602fe127a","Type":"ContainerStarted","Data":"104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374"} Oct 06 15:20:55 crc 
kubenswrapper[4888]: I1006 15:20:55.470315 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 06 15:20:56 crc kubenswrapper[4888]: I1006 15:20:56.078722 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9577fcf2-ea6b-47ef-8d09-c35602fe127a","Type":"ContainerStarted","Data":"8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb"} Oct 06 15:20:56 crc kubenswrapper[4888]: I1006 15:20:56.080342 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 15:20:56 crc kubenswrapper[4888]: I1006 15:20:56.109747 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.676176216 podStartE2EDuration="5.10972681s" podCreationTimestamp="2025-10-06 15:20:51 +0000 UTC" firstStartedPulling="2025-10-06 15:20:52.128778539 +0000 UTC m=+1191.941129257" lastFinishedPulling="2025-10-06 15:20:55.562329123 +0000 UTC m=+1195.374679851" observedRunningTime="2025-10-06 15:20:56.105468816 +0000 UTC m=+1195.917819534" watchObservedRunningTime="2025-10-06 15:20:56.10972681 +0000 UTC m=+1195.922077528" Oct 06 15:20:56 crc kubenswrapper[4888]: I1006 15:20:56.288748 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 15:20:56 crc kubenswrapper[4888]: I1006 15:20:56.288786 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 15:20:57 crc kubenswrapper[4888]: I1006 15:20:57.305107 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 15:20:57 crc kubenswrapper[4888]: I1006 15:20:57.305158 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 15:20:57 crc kubenswrapper[4888]: I1006 15:20:57.354026 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 06 15:20:59 crc kubenswrapper[4888]: I1006 15:20:59.361921 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 15:20:59 crc kubenswrapper[4888]: I1006 15:20:59.362391 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 15:21:00 crc kubenswrapper[4888]: I1006 15:21:00.444024 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3541442a-1534-4d51-8696-a0ca98b1b950" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 15:21:00 crc kubenswrapper[4888]: I1006 15:21:00.444081 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3541442a-1534-4d51-8696-a0ca98b1b950" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 15:21:00 
crc kubenswrapper[4888]: I1006 15:21:00.470636 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 06 15:21:00 crc kubenswrapper[4888]: I1006 15:21:00.506415 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 15:21:01 crc kubenswrapper[4888]: I1006 15:21:01.162488 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.170174 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.182346 4888 generic.go:334] "Generic (PLEG): container finished" podID="b4ef5f28-b230-49fd-9858-afe474e4cebe" containerID="5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2" exitCode=137 Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.182408 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b4ef5f28-b230-49fd-9858-afe474e4cebe","Type":"ContainerDied","Data":"5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2"} Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.182450 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b4ef5f28-b230-49fd-9858-afe474e4cebe","Type":"ContainerDied","Data":"5b6647287b38802acd912000f2de2e33c0895a8d948cec353b67eb8627ce5519"} Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.182449 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.182472 4888 scope.go:117] "RemoveContainer" containerID="5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.219225 4888 scope.go:117] "RemoveContainer" containerID="5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2" Oct 06 15:21:06 crc kubenswrapper[4888]: E1006 15:21:06.219698 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2\": container with ID starting with 5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2 not found: ID does not exist" containerID="5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.219751 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2"} err="failed to get container status \"5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2\": rpc error: code = NotFound desc = could not find container \"5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2\": container with ID starting with 5641d2daf4c02c69bf46dd49b6380988aa900e19c53465b790d3ca52349c2df2 not found: ID does not exist" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.294469 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-combined-ca-bundle\") pod \"b4ef5f28-b230-49fd-9858-afe474e4cebe\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " Oct 06 15:21:06 crc kubenswrapper[4888]: 
I1006 15:21:06.294535 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-config-data\") pod \"b4ef5f28-b230-49fd-9858-afe474e4cebe\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.294808 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl5lz\" (UniqueName: \"kubernetes.io/projected/b4ef5f28-b230-49fd-9858-afe474e4cebe-kube-api-access-zl5lz\") pod \"b4ef5f28-b230-49fd-9858-afe474e4cebe\" (UID: \"b4ef5f28-b230-49fd-9858-afe474e4cebe\") " Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.297096 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.297394 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.302425 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ef5f28-b230-49fd-9858-afe474e4cebe-kube-api-access-zl5lz" (OuterVolumeSpecName: "kube-api-access-zl5lz") pod "b4ef5f28-b230-49fd-9858-afe474e4cebe" (UID: "b4ef5f28-b230-49fd-9858-afe474e4cebe"). InnerVolumeSpecName "kube-api-access-zl5lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.326407 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.328315 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4ef5f28-b230-49fd-9858-afe474e4cebe" (UID: "b4ef5f28-b230-49fd-9858-afe474e4cebe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.333939 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-config-data" (OuterVolumeSpecName: "config-data") pod "b4ef5f28-b230-49fd-9858-afe474e4cebe" (UID: "b4ef5f28-b230-49fd-9858-afe474e4cebe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.397307 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl5lz\" (UniqueName: \"kubernetes.io/projected/b4ef5f28-b230-49fd-9858-afe474e4cebe-kube-api-access-zl5lz\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.397639 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.397740 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4ef5f28-b230-49fd-9858-afe474e4cebe-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.514964 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.525771 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.549441 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 15:21:06 crc kubenswrapper[4888]: E1006 15:21:06.550162 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ef5f28-b230-49fd-9858-afe474e4cebe" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.550254 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ef5f28-b230-49fd-9858-afe474e4cebe" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.550634 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ef5f28-b230-49fd-9858-afe474e4cebe" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.551355 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.558387 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.558399 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.558877 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.593099 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.702457 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.702762 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.702845 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.702870 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.702900 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxc2j\" (UniqueName: \"kubernetes.io/projected/d3a38734-2da1-478a-8d16-dae1c736838e-kube-api-access-kxc2j\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.804483 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.804629 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.804696 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.804735 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxc2j\" (UniqueName: \"kubernetes.io/projected/d3a38734-2da1-478a-8d16-dae1c736838e-kube-api-access-kxc2j\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.804890 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.809430 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.813077 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.813608 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.814182 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a38734-2da1-478a-8d16-dae1c736838e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.821840 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxc2j\" (UniqueName: \"kubernetes.io/projected/d3a38734-2da1-478a-8d16-dae1c736838e-kube-api-access-kxc2j\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3a38734-2da1-478a-8d16-dae1c736838e\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.900436 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:06 crc kubenswrapper[4888]: I1006 15:21:06.934251 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ef5f28-b230-49fd-9858-afe474e4cebe" path="/var/lib/kubelet/pods/b4ef5f28-b230-49fd-9858-afe474e4cebe/volumes" Oct 06 15:21:07 crc kubenswrapper[4888]: I1006 15:21:07.202089 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 15:21:07 crc kubenswrapper[4888]: I1006 15:21:07.349164 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 15:21:07 crc kubenswrapper[4888]: W1006 15:21:07.357064 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3a38734_2da1_478a_8d16_dae1c736838e.slice/crio-8906928352a27d1d12e4573c4355b865e629db567ce1fce4d4ff6fa302803610 WatchSource:0}: Error finding container 8906928352a27d1d12e4573c4355b865e629db567ce1fce4d4ff6fa302803610: Status 404 returned error can't find the container with id 8906928352a27d1d12e4573c4355b865e629db567ce1fce4d4ff6fa302803610 Oct 06 15:21:08 crc kubenswrapper[4888]: I1006 15:21:08.208645 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3a38734-2da1-478a-8d16-dae1c736838e","Type":"ContainerStarted","Data":"cb57b85957e1fdbdb558ebf26cfe5cf2ff63f0e2803e2267dbd59b80069db6e2"} Oct 06 15:21:08 crc kubenswrapper[4888]: I1006 15:21:08.211473 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3a38734-2da1-478a-8d16-dae1c736838e","Type":"ContainerStarted","Data":"8906928352a27d1d12e4573c4355b865e629db567ce1fce4d4ff6fa302803610"} Oct 06 15:21:08 crc kubenswrapper[4888]: I1006 15:21:08.234449 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.234427569 podStartE2EDuration="2.234427569s" podCreationTimestamp="2025-10-06 15:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:21:08.223639249 +0000 UTC m=+1208.035989977" watchObservedRunningTime="2025-10-06 15:21:08.234427569 +0000 UTC m=+1208.046778287" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.366910 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.367333 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.367588 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.367678 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.373577 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.375895 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.588314 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-9vj7k"] Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 
15:21:09.590349 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.608907 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-9vj7k"] Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.763936 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.764250 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.764772 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.764855 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpn7f\" (UniqueName: \"kubernetes.io/projected/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-kube-api-access-kpn7f\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.764921 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-config\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.764953 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.867225 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.867332 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpn7f\" (UniqueName: \"kubernetes.io/projected/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-kube-api-access-kpn7f\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc 
kubenswrapper[4888]: I1006 15:21:09.867407 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-config\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.867480 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.867636 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.867770 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.868940 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.868931 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.868947 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.869388 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-config\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.869477 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.895768 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kpn7f\" (UniqueName: \"kubernetes.io/projected/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-kube-api-access-kpn7f\") pod \"dnsmasq-dns-59cf4bdb65-9vj7k\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:09 crc kubenswrapper[4888]: I1006 15:21:09.930492 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:10 crc kubenswrapper[4888]: I1006 15:21:10.438534 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-9vj7k"] Oct 06 15:21:11 crc kubenswrapper[4888]: I1006 15:21:11.268584 4888 generic.go:334] "Generic (PLEG): container finished" podID="d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" containerID="a8ffb63741cd0ec347b6f83d7ea1125802e8c685c501747fb8567685a0d361ec" exitCode=0 Oct 06 15:21:11 crc kubenswrapper[4888]: I1006 15:21:11.268681 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" event={"ID":"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc","Type":"ContainerDied","Data":"a8ffb63741cd0ec347b6f83d7ea1125802e8c685c501747fb8567685a0d361ec"} Oct 06 15:21:11 crc kubenswrapper[4888]: I1006 15:21:11.269231 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" event={"ID":"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc","Type":"ContainerStarted","Data":"6ab95e6cedad6302cf1d3df9e3095b986837be719073bc90c3cba68d8a1171ce"} Oct 06 15:21:11 crc kubenswrapper[4888]: I1006 15:21:11.901647 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.007590 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.281009 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3541442a-1534-4d51-8696-a0ca98b1b950" containerName="nova-api-log" containerID="cri-o://6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265" gracePeriod=30 Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.281430 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" event={"ID":"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc","Type":"ContainerStarted","Data":"916ddaaec0160c5db2506143914fec08b2c5a9cc60f1e0734b1666717ad31679"} Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.281694 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3541442a-1534-4d51-8696-a0ca98b1b950" containerName="nova-api-api" containerID="cri-o://85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d" gracePeriod=30 Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.282262 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.314152 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" podStartSLOduration=3.314128514 podStartE2EDuration="3.314128514s" podCreationTimestamp="2025-10-06 15:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:21:12.30195485 +0000 UTC m=+1212.114305568" watchObservedRunningTime="2025-10-06 15:21:12.314128514 +0000 UTC m=+1212.126479232" 
Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.500009 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.500368 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="ceilometer-central-agent" containerID="cri-o://03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3" gracePeriod=30 Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.500502 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="proxy-httpd" containerID="cri-o://8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb" gracePeriod=30 Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.500554 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="sg-core" containerID="cri-o://104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374" gracePeriod=30 Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.500606 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="ceilometer-notification-agent" containerID="cri-o://423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9" gracePeriod=30 Oct 06 15:21:12 crc kubenswrapper[4888]: I1006 15:21:12.516248 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.199:3000/\": EOF" Oct 06 15:21:13 crc kubenswrapper[4888]: I1006 15:21:13.291464 4888 generic.go:334] "Generic (PLEG): container finished" podID="3541442a-1534-4d51-8696-a0ca98b1b950" containerID="6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265" exitCode=143 Oct 06 15:21:13 crc kubenswrapper[4888]: I1006 15:21:13.291496 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3541442a-1534-4d51-8696-a0ca98b1b950","Type":"ContainerDied","Data":"6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265"} Oct 06 15:21:13 crc kubenswrapper[4888]: I1006 15:21:13.293395 4888 generic.go:334] "Generic (PLEG): container finished" podID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerID="8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb" exitCode=0 Oct 06 15:21:13 crc kubenswrapper[4888]: I1006 15:21:13.293414 4888 generic.go:334] "Generic (PLEG): container finished" podID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerID="104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374" exitCode=2 Oct 06 15:21:13 crc kubenswrapper[4888]: I1006 15:21:13.293424 4888 generic.go:334] "Generic (PLEG): container finished" podID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerID="03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3" exitCode=0 Oct 06 15:21:13 crc kubenswrapper[4888]: I1006 15:21:13.293484 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9577fcf2-ea6b-47ef-8d09-c35602fe127a","Type":"ContainerDied","Data":"8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb"} Oct 06 15:21:13 crc kubenswrapper[4888]: I1006 15:21:13.293540 4888 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9577fcf2-ea6b-47ef-8d09-c35602fe127a","Type":"ContainerDied","Data":"104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374"} Oct 06 15:21:13 crc kubenswrapper[4888]: I1006 15:21:13.293555 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9577fcf2-ea6b-47ef-8d09-c35602fe127a","Type":"ContainerDied","Data":"03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3"} Oct 06 15:21:15 crc kubenswrapper[4888]: I1006 15:21:15.866926 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:21:15 crc kubenswrapper[4888]: I1006 15:21:15.897220 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3541442a-1534-4d51-8696-a0ca98b1b950-logs\") pod \"3541442a-1534-4d51-8696-a0ca98b1b950\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " Oct 06 15:21:15 crc kubenswrapper[4888]: I1006 15:21:15.897391 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xlxs\" (UniqueName: \"kubernetes.io/projected/3541442a-1534-4d51-8696-a0ca98b1b950-kube-api-access-5xlxs\") pod \"3541442a-1534-4d51-8696-a0ca98b1b950\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " Oct 06 15:21:15 crc kubenswrapper[4888]: I1006 15:21:15.897491 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-config-data\") pod \"3541442a-1534-4d51-8696-a0ca98b1b950\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " Oct 06 15:21:15 crc kubenswrapper[4888]: I1006 15:21:15.897520 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-combined-ca-bundle\") pod \"3541442a-1534-4d51-8696-a0ca98b1b950\" (UID: \"3541442a-1534-4d51-8696-a0ca98b1b950\") " Oct 06 15:21:15 crc kubenswrapper[4888]: I1006 15:21:15.897898 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3541442a-1534-4d51-8696-a0ca98b1b950-logs" (OuterVolumeSpecName: "logs") pod "3541442a-1534-4d51-8696-a0ca98b1b950" (UID: "3541442a-1534-4d51-8696-a0ca98b1b950"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:21:15 crc kubenswrapper[4888]: I1006 15:21:15.898210 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3541442a-1534-4d51-8696-a0ca98b1b950-logs\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:15 crc kubenswrapper[4888]: I1006 15:21:15.905033 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3541442a-1534-4d51-8696-a0ca98b1b950-kube-api-access-5xlxs" (OuterVolumeSpecName: "kube-api-access-5xlxs") pod "3541442a-1534-4d51-8696-a0ca98b1b950" (UID: "3541442a-1534-4d51-8696-a0ca98b1b950"). InnerVolumeSpecName "kube-api-access-5xlxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:21:15 crc kubenswrapper[4888]: I1006 15:21:15.949485 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-config-data" (OuterVolumeSpecName: "config-data") pod "3541442a-1534-4d51-8696-a0ca98b1b950" (UID: "3541442a-1534-4d51-8696-a0ca98b1b950"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.000169 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xlxs\" (UniqueName: \"kubernetes.io/projected/3541442a-1534-4d51-8696-a0ca98b1b950-kube-api-access-5xlxs\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.000210 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.021072 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3541442a-1534-4d51-8696-a0ca98b1b950" (UID: "3541442a-1534-4d51-8696-a0ca98b1b950"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.101747 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3541442a-1534-4d51-8696-a0ca98b1b950-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.346429 4888 generic.go:334] "Generic (PLEG): container finished" podID="3541442a-1534-4d51-8696-a0ca98b1b950" containerID="85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d" exitCode=0 Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.346474 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3541442a-1534-4d51-8696-a0ca98b1b950","Type":"ContainerDied","Data":"85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d"} Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.346500 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3541442a-1534-4d51-8696-a0ca98b1b950","Type":"ContainerDied","Data":"03c61bc4157071e194e2801df46539e2f5910ce81f385bed63ebc3b9b7f1f489"} Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.346517 4888 scope.go:117] "RemoveContainer" containerID="85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.346660 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.387347 4888 scope.go:117] "RemoveContainer" containerID="6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.406005 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.411883 4888 scope.go:117] "RemoveContainer" containerID="85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d" Oct 06 15:21:16 crc kubenswrapper[4888]: E1006 15:21:16.412296 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d\": container with ID starting with 85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d not found: ID does not exist" containerID="85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.412328 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d"} err="failed to get container status \"85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d\": rpc error: code = NotFound desc = could not find container \"85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d\": container with ID starting with 85441945b185fb06c761ccaf23885be1fac661f25ea9742980b6f063e5e54b3d not found: ID does not exist" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.412349 4888 scope.go:117] "RemoveContainer" containerID="6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265" Oct 06 15:21:16 crc kubenswrapper[4888]: E1006 15:21:16.412673 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265\": container with ID starting with 6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265 not found: ID does not exist" containerID="6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.412710 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265"} err="failed to get container status \"6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265\": rpc error: code = NotFound desc = could not find container \"6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265\": container with ID starting with 6984972ad91e033967c4f5759720e7a264fda9da5ae93cf5cfd0134477336265 not found: ID does not exist" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.418279 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.436681 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 15:21:16 crc kubenswrapper[4888]: E1006 15:21:16.437100 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3541442a-1534-4d51-8696-a0ca98b1b950" containerName="nova-api-log" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.437118 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="3541442a-1534-4d51-8696-a0ca98b1b950" containerName="nova-api-log" Oct 06 15:21:16 crc 
kubenswrapper[4888]: E1006 15:21:16.437142 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3541442a-1534-4d51-8696-a0ca98b1b950" containerName="nova-api-api" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.437149 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="3541442a-1534-4d51-8696-a0ca98b1b950" containerName="nova-api-api" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.437326 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="3541442a-1534-4d51-8696-a0ca98b1b950" containerName="nova-api-log" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.437352 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="3541442a-1534-4d51-8696-a0ca98b1b950" containerName="nova-api-api" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.438870 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.443117 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.443435 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.443576 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.452161 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.516906 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-config-data\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.517051 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7bd1ed-eb83-4617-9479-732e3fdb47a8-logs\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.517129 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.517252 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-public-tls-certs\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.517345 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfbdl\" (UniqueName: \"kubernetes.io/projected/db7bd1ed-eb83-4617-9479-732e3fdb47a8-kube-api-access-jfbdl\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.517401 4888 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.618672 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.618744 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-public-tls-certs\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.618780 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfbdl\" (UniqueName: \"kubernetes.io/projected/db7bd1ed-eb83-4617-9479-732e3fdb47a8-kube-api-access-jfbdl\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.618857 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.618900 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-config-data\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.618968 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7bd1ed-eb83-4617-9479-732e3fdb47a8-logs\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.621421 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7bd1ed-eb83-4617-9479-732e3fdb47a8-logs\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.625703 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-config-data\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.628317 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.642599 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-public-tls-certs\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.642780 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.643299 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfbdl\" (UniqueName: \"kubernetes.io/projected/db7bd1ed-eb83-4617-9479-732e3fdb47a8-kube-api-access-jfbdl\") pod \"nova-api-0\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") " pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.760571 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.902082 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.954857 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3541442a-1534-4d51-8696-a0ca98b1b950" path="/var/lib/kubelet/pods/3541442a-1534-4d51-8696-a0ca98b1b950/volumes" Oct 06 15:21:16 crc kubenswrapper[4888]: I1006 15:21:16.955713 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.048310 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.128970 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-sg-core-conf-yaml\") pod \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.129055 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-config-data\") pod \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.129162 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpxrk\" (UniqueName: \"kubernetes.io/projected/9577fcf2-ea6b-47ef-8d09-c35602fe127a-kube-api-access-lpxrk\") pod \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.129250 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-run-httpd\") pod \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.129326 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-combined-ca-bundle\") pod \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.129384 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-scripts\") pod \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.129411 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-log-httpd\") pod \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.129443 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-ceilometer-tls-certs\") pod \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\" (UID: \"9577fcf2-ea6b-47ef-8d09-c35602fe127a\") " Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.130026 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9577fcf2-ea6b-47ef-8d09-c35602fe127a" (UID: "9577fcf2-ea6b-47ef-8d09-c35602fe127a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.131394 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9577fcf2-ea6b-47ef-8d09-c35602fe127a" (UID: "9577fcf2-ea6b-47ef-8d09-c35602fe127a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.138626 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-scripts" (OuterVolumeSpecName: "scripts") pod "9577fcf2-ea6b-47ef-8d09-c35602fe127a" (UID: "9577fcf2-ea6b-47ef-8d09-c35602fe127a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.171309 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9577fcf2-ea6b-47ef-8d09-c35602fe127a-kube-api-access-lpxrk" (OuterVolumeSpecName: "kube-api-access-lpxrk") pod "9577fcf2-ea6b-47ef-8d09-c35602fe127a" (UID: "9577fcf2-ea6b-47ef-8d09-c35602fe127a"). InnerVolumeSpecName "kube-api-access-lpxrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.207905 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9577fcf2-ea6b-47ef-8d09-c35602fe127a" (UID: "9577fcf2-ea6b-47ef-8d09-c35602fe127a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.231752 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.231783 4888 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.231811 4888 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.231826 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpxrk\" (UniqueName: \"kubernetes.io/projected/9577fcf2-ea6b-47ef-8d09-c35602fe127a-kube-api-access-lpxrk\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.231843 4888 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9577fcf2-ea6b-47ef-8d09-c35602fe127a-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.250813 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9577fcf2-ea6b-47ef-8d09-c35602fe127a" (UID: "9577fcf2-ea6b-47ef-8d09-c35602fe127a"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.257135 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9577fcf2-ea6b-47ef-8d09-c35602fe127a" (UID: "9577fcf2-ea6b-47ef-8d09-c35602fe127a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.302904 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-config-data" (OuterVolumeSpecName: "config-data") pod "9577fcf2-ea6b-47ef-8d09-c35602fe127a" (UID: "9577fcf2-ea6b-47ef-8d09-c35602fe127a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.325070 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.333252 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.333285 4888 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.333297 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9577fcf2-ea6b-47ef-8d09-c35602fe127a-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.372305 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db7bd1ed-eb83-4617-9479-732e3fdb47a8","Type":"ContainerStarted","Data":"cb0943f60f5a4d684d5832a0424e2cc0a34ca97a2a249d8868d63b39f377b51d"} Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.375024 4888 generic.go:334] "Generic (PLEG): container finished" podID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerID="423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9" exitCode=0 Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.375094 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.375105 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9577fcf2-ea6b-47ef-8d09-c35602fe127a","Type":"ContainerDied","Data":"423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9"} Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.375142 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9577fcf2-ea6b-47ef-8d09-c35602fe127a","Type":"ContainerDied","Data":"99ce52ed845a2c14577a351e3d020c2237e0b4b1be57e6e15d563c1118dc1e84"} Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.375161 4888 scope.go:117] "RemoveContainer" containerID="8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.402186 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.412810 4888 scope.go:117] "RemoveContainer" containerID="104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.418357 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.437790 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.445553 4888 scope.go:117] "RemoveContainer" containerID="423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.467602 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:21:17 crc kubenswrapper[4888]: E1006 15:21:17.468120 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="ceilometer-notification-agent" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.468140 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="ceilometer-notification-agent" Oct 06 15:21:17 crc kubenswrapper[4888]: E1006 15:21:17.468167 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="proxy-httpd" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.468178 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="proxy-httpd" Oct 06 15:21:17 crc kubenswrapper[4888]: E1006 15:21:17.468218 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="sg-core" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.468227 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="sg-core" Oct 06 15:21:17 crc kubenswrapper[4888]: E1006 15:21:17.468255 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="ceilometer-central-agent" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.468265 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="ceilometer-central-agent" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.468489 4888 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="sg-core" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.468516 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="ceilometer-notification-agent" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.468524 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="proxy-httpd" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.468539 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" containerName="ceilometer-central-agent" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.470489 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.473096 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.473362 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.473496 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.494713 4888 scope.go:117] "RemoveContainer" containerID="03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.517143 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.539787 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.539988 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.540096 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-scripts\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.540143 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkmd9\" (UniqueName: \"kubernetes.io/projected/224634c8-9de5-4ab5-a57a-7785afac360c-kube-api-access-bkmd9\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.540220 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-config-data\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " 
pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.540286 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/224634c8-9de5-4ab5-a57a-7785afac360c-run-httpd\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.540474 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/224634c8-9de5-4ab5-a57a-7785afac360c-log-httpd\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.540499 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.610437 4888 scope.go:117] "RemoveContainer" containerID="8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb" Oct 06 15:21:17 crc kubenswrapper[4888]: E1006 15:21:17.611211 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb\": container with ID starting with 8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb not found: ID does not exist" containerID="8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.611254 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb"} err="failed to get container status \"8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb\": rpc error: code = NotFound desc = could not find container \"8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb\": container with ID starting with 8d49551ddbd6e5d83728a58e35670737753b01fbff326fc9848aa1cfb6de6afb not found: ID does not exist" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.611281 4888 scope.go:117] "RemoveContainer" containerID="104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374" Oct 06 15:21:17 crc kubenswrapper[4888]: E1006 15:21:17.616109 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374\": container with ID starting with 104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374 not found: ID does not exist" containerID="104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.616156 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374"} err="failed to get container status \"104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374\": rpc error: code = NotFound desc = could not find container \"104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374\": container with ID starting with 
104c097b727bb7df7e1b8509a59fdefe6fd3d4d0124c45af1b42d117cff33374 not found: ID does not exist" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.616220 4888 scope.go:117] "RemoveContainer" containerID="423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9" Oct 06 15:21:17 crc kubenswrapper[4888]: E1006 15:21:17.620448 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9\": container with ID starting with 423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9 not found: ID does not exist" containerID="423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.620506 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9"} err="failed to get container status \"423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9\": rpc error: code = NotFound desc = could not find container \"423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9\": container with ID starting with 423afb6ca195895321e29ec9f21db205871c74dda0f61c5ed44748ddd87880f9 not found: ID does not exist" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.620535 4888 scope.go:117] "RemoveContainer" containerID="03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3" Oct 06 15:21:17 crc kubenswrapper[4888]: E1006 15:21:17.623429 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3\": container with ID starting with 03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3 not found: ID does not exist" containerID="03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.623471 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3"} err="failed to get container status \"03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3\": rpc error: code = NotFound desc = could not find container \"03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3\": container with ID starting with 03b5edea7b35aa2aad52f94d58f1aeb8399d638f01d455dfc126be60e07f3fb3 not found: ID does not exist" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.634340 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-rr4td"] Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.637735 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rr4td" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.643271 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.643351 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.644086 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/224634c8-9de5-4ab5-a57a-7785afac360c-log-httpd\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.644121 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.644259 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.644312 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.644360 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-scripts\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.644390 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkmd9\" (UniqueName: \"kubernetes.io/projected/224634c8-9de5-4ab5-a57a-7785afac360c-kube-api-access-bkmd9\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.644438 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-config-data\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.644482 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/224634c8-9de5-4ab5-a57a-7785afac360c-run-httpd\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.645140 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/224634c8-9de5-4ab5-a57a-7785afac360c-run-httpd\") pod \"ceilometer-0\" (UID: 
\"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.645359 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/224634c8-9de5-4ab5-a57a-7785afac360c-log-httpd\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.649760 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-scripts\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.656559 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.656940 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.669712 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rr4td"] Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.670266 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-config-data\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.672649 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/224634c8-9de5-4ab5-a57a-7785afac360c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.673805 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkmd9\" (UniqueName: \"kubernetes.io/projected/224634c8-9de5-4ab5-a57a-7785afac360c-kube-api-access-bkmd9\") pod \"ceilometer-0\" (UID: \"224634c8-9de5-4ab5-a57a-7785afac360c\") " pod="openstack/ceilometer-0" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.748657 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dj8r\" (UniqueName: \"kubernetes.io/projected/39c32a53-487b-42f5-ba2e-6508521a8cc3-kube-api-access-4dj8r\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td" Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.748765 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td" Oct 06 15:21:17 crc 
kubenswrapper[4888]: I1006 15:21:17.748965 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-config-data\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.749020 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-scripts\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.851660 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.852263 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-config-data\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.852361 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-scripts\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.853167 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dj8r\" (UniqueName: \"kubernetes.io/projected/39c32a53-487b-42f5-ba2e-6508521a8cc3-kube-api-access-4dj8r\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.858119 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-scripts\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.858164 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-config-data\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.862965 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.878407 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dj8r\" (UniqueName: \"kubernetes.io/projected/39c32a53-487b-42f5-ba2e-6508521a8cc3-kube-api-access-4dj8r\") pod \"nova-cell1-cell-mapping-rr4td\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") " pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:17 crc kubenswrapper[4888]: I1006 15:21:17.900107 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 06 15:21:18 crc kubenswrapper[4888]: I1006 15:21:18.125856 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:18 crc kubenswrapper[4888]: I1006 15:21:18.414604 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 15:21:18 crc kubenswrapper[4888]: I1006 15:21:18.427406 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db7bd1ed-eb83-4617-9479-732e3fdb47a8","Type":"ContainerStarted","Data":"fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a"}
Oct 06 15:21:18 crc kubenswrapper[4888]: I1006 15:21:18.427466 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db7bd1ed-eb83-4617-9479-732e3fdb47a8","Type":"ContainerStarted","Data":"fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea"}
Oct 06 15:21:18 crc kubenswrapper[4888]: I1006 15:21:18.462317 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.462297756 podStartE2EDuration="2.462297756s" podCreationTimestamp="2025-10-06 15:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:21:18.454143569 +0000 UTC m=+1218.266494287" watchObservedRunningTime="2025-10-06 15:21:18.462297756 +0000 UTC m=+1218.274648474"
Oct 06 15:21:18 crc kubenswrapper[4888]: I1006 15:21:18.605081 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rr4td"]
Oct 06 15:21:18 crc kubenswrapper[4888]: I1006 15:21:18.942196 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9577fcf2-ea6b-47ef-8d09-c35602fe127a" path="/var/lib/kubelet/pods/9577fcf2-ea6b-47ef-8d09-c35602fe127a/volumes"
Oct 06 15:21:19 crc kubenswrapper[4888]: I1006 15:21:19.442620 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rr4td" event={"ID":"39c32a53-487b-42f5-ba2e-6508521a8cc3","Type":"ContainerStarted","Data":"38ef1c46e9473045ad8a31b398487d8a8ade8f4223ae44a7db0e19f5e93d6e53"}
Oct 06 15:21:19 crc kubenswrapper[4888]: I1006 15:21:19.442934 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rr4td" event={"ID":"39c32a53-487b-42f5-ba2e-6508521a8cc3","Type":"ContainerStarted","Data":"a4599e7434bf6fd51051a1d3b68aec26fc95dee5e2ab6de1c60d93e3fe60dc13"}
Oct 06 15:21:19 crc kubenswrapper[4888]: I1006 15:21:19.445919 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"224634c8-9de5-4ab5-a57a-7785afac360c","Type":"ContainerStarted","Data":"ee07514699a4c013c79f9cf49baff2cf7c3acd802352f4c2118fa8533be19084"}
Oct 06 15:21:19 crc kubenswrapper[4888]: I1006 15:21:19.446039 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"224634c8-9de5-4ab5-a57a-7785afac360c","Type":"ContainerStarted","Data":"6264b84670b59fd3ee9282870c6a705201fe039a2b2505c9d3f984bb2c0dbc1b"}
Oct 06 15:21:19 crc kubenswrapper[4888]: I1006 15:21:19.465067 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-rr4td" podStartSLOduration=2.465023275 podStartE2EDuration="2.465023275s" podCreationTimestamp="2025-10-06 15:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:21:19.461963238 +0000 UTC m=+1219.274313956" watchObservedRunningTime="2025-10-06 15:21:19.465023275 +0000 UTC m=+1219.277374003"
Oct 06 15:21:19 crc kubenswrapper[4888]: I1006 15:21:19.933207 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k"
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.023685 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-qn77m"]
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.026177 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" podUID="4b79ad14-c215-418e-a2d9-a052e1f585bf" containerName="dnsmasq-dns" containerID="cri-o://f24cfbf4a7a581e34d608bfd770e121f2afcc8f65a38adc9d6db74680f429c58" gracePeriod=10
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.477124 4888 generic.go:334] "Generic (PLEG): container finished" podID="4b79ad14-c215-418e-a2d9-a052e1f585bf" containerID="f24cfbf4a7a581e34d608bfd770e121f2afcc8f65a38adc9d6db74680f429c58" exitCode=0
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.477252 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" event={"ID":"4b79ad14-c215-418e-a2d9-a052e1f585bf","Type":"ContainerDied","Data":"f24cfbf4a7a581e34d608bfd770e121f2afcc8f65a38adc9d6db74680f429c58"}
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.506038 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"224634c8-9de5-4ab5-a57a-7785afac360c","Type":"ContainerStarted","Data":"92904102837c7bf8f8bb82218b051de1482c6cf09bebc10a9fa49eaf79b54836"}
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.506090 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"224634c8-9de5-4ab5-a57a-7785afac360c","Type":"ContainerStarted","Data":"deadbff4cc097727412579e7d620c82e63b5e07b57e41d80f98739f854d9df7c"}
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.721376 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m"
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.821996 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-swift-storage-0\") pod \"4b79ad14-c215-418e-a2d9-a052e1f585bf\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") "
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.822101 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcwhx\" (UniqueName: \"kubernetes.io/projected/4b79ad14-c215-418e-a2d9-a052e1f585bf-kube-api-access-zcwhx\") pod \"4b79ad14-c215-418e-a2d9-a052e1f585bf\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") "
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.822146 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-sb\") pod \"4b79ad14-c215-418e-a2d9-a052e1f585bf\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") "
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.822188 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-nb\") pod \"4b79ad14-c215-418e-a2d9-a052e1f585bf\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") "
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.822222 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-config\") pod \"4b79ad14-c215-418e-a2d9-a052e1f585bf\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") "
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.822255 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-svc\") pod \"4b79ad14-c215-418e-a2d9-a052e1f585bf\" (UID: \"4b79ad14-c215-418e-a2d9-a052e1f585bf\") "
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.831955 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b79ad14-c215-418e-a2d9-a052e1f585bf-kube-api-access-zcwhx" (OuterVolumeSpecName: "kube-api-access-zcwhx") pod "4b79ad14-c215-418e-a2d9-a052e1f585bf" (UID: "4b79ad14-c215-418e-a2d9-a052e1f585bf"). InnerVolumeSpecName "kube-api-access-zcwhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.897840 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b79ad14-c215-418e-a2d9-a052e1f585bf" (UID: "4b79ad14-c215-418e-a2d9-a052e1f585bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.919848 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b79ad14-c215-418e-a2d9-a052e1f585bf" (UID: "4b79ad14-c215-418e-a2d9-a052e1f585bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.920571 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4b79ad14-c215-418e-a2d9-a052e1f585bf" (UID: "4b79ad14-c215-418e-a2d9-a052e1f585bf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.923305 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4b79ad14-c215-418e-a2d9-a052e1f585bf" (UID: "4b79ad14-c215-418e-a2d9-a052e1f585bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.925452 4888 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.925512 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcwhx\" (UniqueName: \"kubernetes.io/projected/4b79ad14-c215-418e-a2d9-a052e1f585bf-kube-api-access-zcwhx\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.925529 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.925541 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.925552 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:20 crc kubenswrapper[4888]: I1006 15:21:20.951336 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-config" (OuterVolumeSpecName: "config") pod "4b79ad14-c215-418e-a2d9-a052e1f585bf" (UID: "4b79ad14-c215-418e-a2d9-a052e1f585bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:21:21 crc kubenswrapper[4888]: I1006 15:21:21.027734 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b79ad14-c215-418e-a2d9-a052e1f585bf-config\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:21 crc kubenswrapper[4888]: I1006 15:21:21.516389 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m" event={"ID":"4b79ad14-c215-418e-a2d9-a052e1f585bf","Type":"ContainerDied","Data":"5cd40f7a9a386b312af2b274eea9a09ae1831037ed4a8027a7665d50d8c57ded"}
Oct 06 15:21:21 crc kubenswrapper[4888]: I1006 15:21:21.516436 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-qn77m"
Oct 06 15:21:21 crc kubenswrapper[4888]: I1006 15:21:21.516458 4888 scope.go:117] "RemoveContainer" containerID="f24cfbf4a7a581e34d608bfd770e121f2afcc8f65a38adc9d6db74680f429c58"
Oct 06 15:21:21 crc kubenswrapper[4888]: I1006 15:21:21.586967 4888 scope.go:117] "RemoveContainer" containerID="37ed19fe8f63d96fa6ccdd0ee71e1086e5c19abd84dcce0014dbb475c3dca2f1"
Oct 06 15:21:21 crc kubenswrapper[4888]: I1006 15:21:21.588424 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-qn77m"]
Oct 06 15:21:21 crc kubenswrapper[4888]: I1006 15:21:21.596999 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-qn77m"]
Oct 06 15:21:22 crc kubenswrapper[4888]: I1006 15:21:22.528971 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"224634c8-9de5-4ab5-a57a-7785afac360c","Type":"ContainerStarted","Data":"f43e7bf6fb3302bfc1292837594cbefa581d46462d9e986fa4e046162a9c9e5a"}
Oct 06 15:21:22 crc kubenswrapper[4888]: I1006 15:21:22.529206 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 06 15:21:22 crc kubenswrapper[4888]: I1006 15:21:22.561253 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.787568597 podStartE2EDuration="5.561230279s" podCreationTimestamp="2025-10-06 15:21:17 +0000 UTC" firstStartedPulling="2025-10-06 15:21:18.394471747 +0000 UTC m=+1218.206822475" lastFinishedPulling="2025-10-06 15:21:22.168133439 +0000 UTC m=+1221.980484157" observedRunningTime="2025-10-06 15:21:22.550666226 +0000 UTC m=+1222.363016944" watchObservedRunningTime="2025-10-06 15:21:22.561230279 +0000 UTC m=+1222.373580997"
Oct 06 15:21:22 crc kubenswrapper[4888]: I1006 15:21:22.936453 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b79ad14-c215-418e-a2d9-a052e1f585bf" path="/var/lib/kubelet/pods/4b79ad14-c215-418e-a2d9-a052e1f585bf/volumes"
Oct 06 15:21:25 crc kubenswrapper[4888]: I1006 15:21:25.557254 4888 generic.go:334] "Generic (PLEG): container finished" podID="39c32a53-487b-42f5-ba2e-6508521a8cc3" containerID="38ef1c46e9473045ad8a31b398487d8a8ade8f4223ae44a7db0e19f5e93d6e53" exitCode=0
Oct 06 15:21:25 crc kubenswrapper[4888]: I1006 15:21:25.557338 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rr4td" event={"ID":"39c32a53-487b-42f5-ba2e-6508521a8cc3","Type":"ContainerDied","Data":"38ef1c46e9473045ad8a31b398487d8a8ade8f4223ae44a7db0e19f5e93d6e53"}
Oct 06 15:21:26 crc kubenswrapper[4888]: I1006 15:21:26.762625 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 06 15:21:26 crc kubenswrapper[4888]: I1006 15:21:26.762986 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 06 15:21:26 crc kubenswrapper[4888]: I1006 15:21:26.935385 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.041190 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dj8r\" (UniqueName: \"kubernetes.io/projected/39c32a53-487b-42f5-ba2e-6508521a8cc3-kube-api-access-4dj8r\") pod \"39c32a53-487b-42f5-ba2e-6508521a8cc3\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") "
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.041283 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-scripts\") pod \"39c32a53-487b-42f5-ba2e-6508521a8cc3\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") "
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.041323 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-combined-ca-bundle\") pod \"39c32a53-487b-42f5-ba2e-6508521a8cc3\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") "
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.041401 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-config-data\") pod \"39c32a53-487b-42f5-ba2e-6508521a8cc3\" (UID: \"39c32a53-487b-42f5-ba2e-6508521a8cc3\") "
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.050937 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c32a53-487b-42f5-ba2e-6508521a8cc3-kube-api-access-4dj8r" (OuterVolumeSpecName: "kube-api-access-4dj8r") pod "39c32a53-487b-42f5-ba2e-6508521a8cc3" (UID: "39c32a53-487b-42f5-ba2e-6508521a8cc3"). InnerVolumeSpecName "kube-api-access-4dj8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.066965 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-scripts" (OuterVolumeSpecName: "scripts") pod "39c32a53-487b-42f5-ba2e-6508521a8cc3" (UID: "39c32a53-487b-42f5-ba2e-6508521a8cc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.078530 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39c32a53-487b-42f5-ba2e-6508521a8cc3" (UID: "39c32a53-487b-42f5-ba2e-6508521a8cc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.080043 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-config-data" (OuterVolumeSpecName: "config-data") pod "39c32a53-487b-42f5-ba2e-6508521a8cc3" (UID: "39c32a53-487b-42f5-ba2e-6508521a8cc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.146373 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dj8r\" (UniqueName: \"kubernetes.io/projected/39c32a53-487b-42f5-ba2e-6508521a8cc3-kube-api-access-4dj8r\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.146423 4888 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-scripts\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.146438 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.146451 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39c32a53-487b-42f5-ba2e-6508521a8cc3-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.578016 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rr4td" event={"ID":"39c32a53-487b-42f5-ba2e-6508521a8cc3","Type":"ContainerDied","Data":"a4599e7434bf6fd51051a1d3b68aec26fc95dee5e2ab6de1c60d93e3fe60dc13"}
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.578708 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4599e7434bf6fd51051a1d3b68aec26fc95dee5e2ab6de1c60d93e3fe60dc13"
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.578086 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rr4td"
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.773344 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.773980 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerName="nova-api-api" containerID="cri-o://fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a" gracePeriod=30
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.773621 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerName="nova-api-log" containerID="cri-o://fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea" gracePeriod=30
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.784135 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.784145 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.820114 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.820519 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="26576ea5-0f85-4ca1-bf66-11c6f5d8edb6" containerName="nova-scheduler-scheduler" containerID="cri-o://ca337e8a3fd253caee21977972b39d38dec83799f9a0e3bb680510e11da9efca" gracePeriod=30
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.834217 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.834425 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-log" containerID="cri-o://fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6" gracePeriod=30
Oct 06 15:21:27 crc kubenswrapper[4888]: I1006 15:21:27.834630 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-metadata" containerID="cri-o://3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721" gracePeriod=30
Oct 06 15:21:28 crc kubenswrapper[4888]: I1006 15:21:28.589756 4888 generic.go:334] "Generic (PLEG): container finished" podID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerID="fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea" exitCode=143
Oct 06 15:21:28 crc kubenswrapper[4888]: I1006 15:21:28.589818 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db7bd1ed-eb83-4617-9479-732e3fdb47a8","Type":"ContainerDied","Data":"fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea"}
Oct 06 15:21:28 crc kubenswrapper[4888]: I1006 15:21:28.591658 4888 generic.go:334] "Generic (PLEG): container finished" podID="8735866e-24ad-472a-9a6c-c326841f1d30" containerID="fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6" exitCode=143
Oct 06 15:21:28 crc kubenswrapper[4888]: I1006 15:21:28.591682 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8735866e-24ad-472a-9a6c-c326841f1d30","Type":"ContainerDied","Data":"fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6"}
Oct 06 15:21:29 crc kubenswrapper[4888]: I1006 15:21:29.603712 4888 generic.go:334] "Generic (PLEG): container finished" podID="26576ea5-0f85-4ca1-bf66-11c6f5d8edb6" containerID="ca337e8a3fd253caee21977972b39d38dec83799f9a0e3bb680510e11da9efca" exitCode=0
Oct 06 15:21:29 crc kubenswrapper[4888]: I1006 15:21:29.603819 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6","Type":"ContainerDied","Data":"ca337e8a3fd253caee21977972b39d38dec83799f9a0e3bb680510e11da9efca"}
Oct 06 15:21:29 crc kubenswrapper[4888]: I1006 15:21:29.911863 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.013497 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-config-data\") pod \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") "
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.013557 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-combined-ca-bundle\") pod \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") "
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.013624 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x994r\" (UniqueName: \"kubernetes.io/projected/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-kube-api-access-x994r\") pod \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\" (UID: \"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6\") "
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.025292 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-kube-api-access-x994r" (OuterVolumeSpecName: "kube-api-access-x994r") pod "26576ea5-0f85-4ca1-bf66-11c6f5d8edb6" (UID: "26576ea5-0f85-4ca1-bf66-11c6f5d8edb6"). InnerVolumeSpecName "kube-api-access-x994r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.046267 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-config-data" (OuterVolumeSpecName: "config-data") pod "26576ea5-0f85-4ca1-bf66-11c6f5d8edb6" (UID: "26576ea5-0f85-4ca1-bf66-11c6f5d8edb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.051148 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26576ea5-0f85-4ca1-bf66-11c6f5d8edb6" (UID: "26576ea5-0f85-4ca1-bf66-11c6f5d8edb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.115677 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.115723 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.115739 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x994r\" (UniqueName: \"kubernetes.io/projected/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6-kube-api-access-x994r\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.614355 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"26576ea5-0f85-4ca1-bf66-11c6f5d8edb6","Type":"ContainerDied","Data":"775d8fa5701ff892bf4bb9d5291cad4969dbbe639c0610a7a806485235f016ce"}
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.614630 4888 scope.go:117] "RemoveContainer" containerID="ca337e8a3fd253caee21977972b39d38dec83799f9a0e3bb680510e11da9efca"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.614386 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.650945 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.672648 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.680956 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Oct 06 15:21:30 crc kubenswrapper[4888]: E1006 15:21:30.681414 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26576ea5-0f85-4ca1-bf66-11c6f5d8edb6" containerName="nova-scheduler-scheduler"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.681443 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="26576ea5-0f85-4ca1-bf66-11c6f5d8edb6" containerName="nova-scheduler-scheduler"
Oct 06 15:21:30 crc kubenswrapper[4888]: E1006 15:21:30.681467 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b79ad14-c215-418e-a2d9-a052e1f585bf" containerName="init"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.681476 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b79ad14-c215-418e-a2d9-a052e1f585bf" containerName="init"
Oct 06 15:21:30 crc kubenswrapper[4888]: E1006 15:21:30.681498 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c32a53-487b-42f5-ba2e-6508521a8cc3" containerName="nova-manage"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.681505 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c32a53-487b-42f5-ba2e-6508521a8cc3" containerName="nova-manage"
Oct 06 15:21:30 crc kubenswrapper[4888]: E1006 15:21:30.681515 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b79ad14-c215-418e-a2d9-a052e1f585bf" containerName="dnsmasq-dns"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.681520 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b79ad14-c215-418e-a2d9-a052e1f585bf" containerName="dnsmasq-dns"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.681707 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b79ad14-c215-418e-a2d9-a052e1f585bf" containerName="dnsmasq-dns"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.681723 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c32a53-487b-42f5-ba2e-6508521a8cc3" containerName="nova-manage"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.681734 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="26576ea5-0f85-4ca1-bf66-11c6f5d8edb6" containerName="nova-scheduler-scheduler"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.682431 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.687763 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.690331 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.726263 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8ff50-0943-4755-875f-80b51c818468-config-data\") pod \"nova-scheduler-0\" (UID: \"3eb8ff50-0943-4755-875f-80b51c818468\") " pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.726320 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8ff50-0943-4755-875f-80b51c818468-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3eb8ff50-0943-4755-875f-80b51c818468\") " pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.726366 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb2mw\" (UniqueName: \"kubernetes.io/projected/3eb8ff50-0943-4755-875f-80b51c818468-kube-api-access-rb2mw\") pod \"nova-scheduler-0\" (UID: \"3eb8ff50-0943-4755-875f-80b51c818468\") " pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.828110 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb2mw\" (UniqueName: \"kubernetes.io/projected/3eb8ff50-0943-4755-875f-80b51c818468-kube-api-access-rb2mw\") pod \"nova-scheduler-0\" (UID: \"3eb8ff50-0943-4755-875f-80b51c818468\") " pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.828308 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8ff50-0943-4755-875f-80b51c818468-config-data\") pod \"nova-scheduler-0\" (UID: \"3eb8ff50-0943-4755-875f-80b51c818468\") " pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.828372 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8ff50-0943-4755-875f-80b51c818468-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3eb8ff50-0943-4755-875f-80b51c818468\") " pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.835641 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8ff50-0943-4755-875f-80b51c818468-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3eb8ff50-0943-4755-875f-80b51c818468\") " pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.835646 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8ff50-0943-4755-875f-80b51c818468-config-data\") pod \"nova-scheduler-0\" (UID: \"3eb8ff50-0943-4755-875f-80b51c818468\") " pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.846138 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb2mw\" (UniqueName: \"kubernetes.io/projected/3eb8ff50-0943-4755-875f-80b51c818468-kube-api-access-rb2mw\") pod \"nova-scheduler-0\" (UID: \"3eb8ff50-0943-4755-875f-80b51c818468\") " pod="openstack/nova-scheduler-0"
Oct 06 15:21:30 crc kubenswrapper[4888]: I1006 15:21:30.935676 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26576ea5-0f85-4ca1-bf66-11c6f5d8edb6" path="/var/lib/kubelet/pods/26576ea5-0f85-4ca1-bf66-11c6f5d8edb6/volumes"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.015945 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.289140 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": dial tcp 10.217.0.195:8775: connect: connection refused"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.289931 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": dial tcp 10.217.0.195:8775: connect: connection refused"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.447985 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 06 15:21:31 crc kubenswrapper[4888]: W1006 15:21:31.472266 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb8ff50_0943_4755_875f_80b51c818468.slice/crio-f96316e3e4dbe1c9683decf560023c63ec41e9429e65322568059001405fa6e5 WatchSource:0}: Error finding container f96316e3e4dbe1c9683decf560023c63ec41e9429e65322568059001405fa6e5: Status 404 returned error can't find the container with id f96316e3e4dbe1c9683decf560023c63ec41e9429e65322568059001405fa6e5
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.531378 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.632078 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3eb8ff50-0943-4755-875f-80b51c818468","Type":"ContainerStarted","Data":"f96316e3e4dbe1c9683decf560023c63ec41e9429e65322568059001405fa6e5"}
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.636676 4888 generic.go:334] "Generic (PLEG): container finished" podID="8735866e-24ad-472a-9a6c-c326841f1d30" containerID="3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721" exitCode=0
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.636720 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8735866e-24ad-472a-9a6c-c326841f1d30","Type":"ContainerDied","Data":"3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721"}
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.636744 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8735866e-24ad-472a-9a6c-c326841f1d30","Type":"ContainerDied","Data":"a7f4cf77fee0c7b1090307bc2ebecb4c6f3036fec3720e737cafa31c676f5cc1"}
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.636760 4888 scope.go:117] "RemoveContainer" containerID="3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.636887 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.640519 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-config-data\") pod \"8735866e-24ad-472a-9a6c-c326841f1d30\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") "
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.640617 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-combined-ca-bundle\") pod \"8735866e-24ad-472a-9a6c-c326841f1d30\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") "
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.640669 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sbxg\" (UniqueName: \"kubernetes.io/projected/8735866e-24ad-472a-9a6c-c326841f1d30-kube-api-access-7sbxg\") pod \"8735866e-24ad-472a-9a6c-c326841f1d30\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") "
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.640747 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-nova-metadata-tls-certs\") pod \"8735866e-24ad-472a-9a6c-c326841f1d30\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") "
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.640828 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8735866e-24ad-472a-9a6c-c326841f1d30-logs\") pod \"8735866e-24ad-472a-9a6c-c326841f1d30\" (UID: \"8735866e-24ad-472a-9a6c-c326841f1d30\") "
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.641726 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8735866e-24ad-472a-9a6c-c326841f1d30-logs" (OuterVolumeSpecName: "logs") pod "8735866e-24ad-472a-9a6c-c326841f1d30" (UID: "8735866e-24ad-472a-9a6c-c326841f1d30"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.648127 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8735866e-24ad-472a-9a6c-c326841f1d30-kube-api-access-7sbxg" (OuterVolumeSpecName: "kube-api-access-7sbxg") pod "8735866e-24ad-472a-9a6c-c326841f1d30" (UID: "8735866e-24ad-472a-9a6c-c326841f1d30"). InnerVolumeSpecName "kube-api-access-7sbxg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.679247 4888 scope.go:117] "RemoveContainer" containerID="fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.701452 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-config-data" (OuterVolumeSpecName: "config-data") pod "8735866e-24ad-472a-9a6c-c326841f1d30" (UID: "8735866e-24ad-472a-9a6c-c326841f1d30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.709859 4888 scope.go:117] "RemoveContainer" containerID="3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721"
Oct 06 15:21:31 crc kubenswrapper[4888]: E1006 15:21:31.710588 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721\": container with ID starting with 3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721 not found: ID does not exist" containerID="3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.710614 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721"} err="failed to get container status \"3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721\": rpc error: code = NotFound desc = could not find container \"3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721\": container with ID starting with 3af5dd2db0a5199f9433ab20640c64b9fe8b792264a9fa0da250c96bfce6e721 not found: ID does not exist"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.710634 4888 scope.go:117] "RemoveContainer" containerID="fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6"
Oct 06 15:21:31 crc kubenswrapper[4888]: E1006 15:21:31.711349 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6\": container with ID starting with fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6 not found: ID does not exist" containerID="fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.711408 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6"} err="failed to get container status \"fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6\": rpc error: code = NotFound desc = could not find container \"fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6\": container with ID starting with fa0a14abc300538e2ec651401902298675de2b7c1e5e2eb85c6d32fe829365b6 not found: ID does not exist"
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.722689 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8735866e-24ad-472a-9a6c-c326841f1d30" (UID: "8735866e-24ad-472a-9a6c-c326841f1d30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.742507 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.742537 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.742548 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sbxg\" (UniqueName: \"kubernetes.io/projected/8735866e-24ad-472a-9a6c-c326841f1d30-kube-api-access-7sbxg\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.742557 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8735866e-24ad-472a-9a6c-c326841f1d30-logs\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.747970 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8735866e-24ad-472a-9a6c-c326841f1d30" (UID: "8735866e-24ad-472a-9a6c-c326841f1d30"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.844160 4888 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8735866e-24ad-472a-9a6c-c326841f1d30-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.975639 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 15:21:31 crc kubenswrapper[4888]: I1006 15:21:31.988832 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.003273 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 15:21:32 crc kubenswrapper[4888]: E1006 15:21:32.003787 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-log"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.003816 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-log"
Oct 06 15:21:32 crc kubenswrapper[4888]: E1006 15:21:32.003837 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-metadata"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.003844 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-metadata"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.004036 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-log"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.004063 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" containerName="nova-metadata-metadata"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.005093 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.011156 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.011308 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.020342 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.046674 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4bcd77d-19fd-4f69-8879-906569e3c709-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.046737 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4bcd77d-19fd-4f69-8879-906569e3c709-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.046919 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bcd77d-19fd-4f69-8879-906569e3c709-config-data\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.046966 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxz5\" (UniqueName: \"kubernetes.io/projected/c4bcd77d-19fd-4f69-8879-906569e3c709-kube-api-access-plxz5\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.047021 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bcd77d-19fd-4f69-8879-906569e3c709-logs\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.149196 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4bcd77d-19fd-4f69-8879-906569e3c709-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.149261 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4bcd77d-19fd-4f69-8879-906569e3c709-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.149317 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bcd77d-19fd-4f69-8879-906569e3c709-config-data\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.149353 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plxz5\" (UniqueName: \"kubernetes.io/projected/c4bcd77d-19fd-4f69-8879-906569e3c709-kube-api-access-plxz5\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.149395 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bcd77d-19fd-4f69-8879-906569e3c709-logs\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.150252 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4bcd77d-19fd-4f69-8879-906569e3c709-logs\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.153578 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4bcd77d-19fd-4f69-8879-906569e3c709-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.153990 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4bcd77d-19fd-4f69-8879-906569e3c709-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.162006 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4bcd77d-19fd-4f69-8879-906569e3c709-config-data\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.170459 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxz5\" (UniqueName: \"kubernetes.io/projected/c4bcd77d-19fd-4f69-8879-906569e3c709-kube-api-access-plxz5\") pod \"nova-metadata-0\" (UID: \"c4bcd77d-19fd-4f69-8879-906569e3c709\") " pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.326577 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.664844 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3eb8ff50-0943-4755-875f-80b51c818468","Type":"ContainerStarted","Data":"ef829c21d093d26baec81491dd4e89912cad515e23df376ceaffa4ae4d4b58ca"}
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.698563 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.69854242 podStartE2EDuration="2.69854242s" podCreationTimestamp="2025-10-06 15:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:21:32.691722284 +0000 UTC m=+1232.504073002" watchObservedRunningTime="2025-10-06 15:21:32.69854242 +0000 UTC m=+1232.510893148"
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.764228 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 15:21:32 crc kubenswrapper[4888]: W1006 15:21:32.771325 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4bcd77d_19fd_4f69_8879_906569e3c709.slice/crio-4161ec79b4d5c77864320ef2859fff2255bdec477e1de1cd044db8a6ed1af253 WatchSource:0}: Error finding container 4161ec79b4d5c77864320ef2859fff2255bdec477e1de1cd044db8a6ed1af253: Status 404 returned error can't find the container with id 4161ec79b4d5c77864320ef2859fff2255bdec477e1de1cd044db8a6ed1af253
Oct 06 15:21:32 crc kubenswrapper[4888]: I1006 15:21:32.941074 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8735866e-24ad-472a-9a6c-c326841f1d30" path="/var/lib/kubelet/pods/8735866e-24ad-472a-9a6c-c326841f1d30/volumes"
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.569648 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.586256 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-combined-ca-bundle\") pod \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") "
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.587026 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfbdl\" (UniqueName: \"kubernetes.io/projected/db7bd1ed-eb83-4617-9479-732e3fdb47a8-kube-api-access-jfbdl\") pod \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") "
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.587086 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-public-tls-certs\") pod \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") "
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.587147 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-internal-tls-certs\") pod \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") "
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.587238 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-config-data\") pod \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") "
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.587339 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7bd1ed-eb83-4617-9479-732e3fdb47a8-logs\") pod \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\" (UID: \"db7bd1ed-eb83-4617-9479-732e3fdb47a8\") "
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.588528 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7bd1ed-eb83-4617-9479-732e3fdb47a8-logs" (OuterVolumeSpecName: "logs") pod "db7bd1ed-eb83-4617-9479-732e3fdb47a8" (UID: "db7bd1ed-eb83-4617-9479-732e3fdb47a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.596469 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7bd1ed-eb83-4617-9479-732e3fdb47a8-kube-api-access-jfbdl" (OuterVolumeSpecName: "kube-api-access-jfbdl") pod "db7bd1ed-eb83-4617-9479-732e3fdb47a8" (UID: "db7bd1ed-eb83-4617-9479-732e3fdb47a8"). InnerVolumeSpecName "kube-api-access-jfbdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.627420 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db7bd1ed-eb83-4617-9479-732e3fdb47a8" (UID: "db7bd1ed-eb83-4617-9479-732e3fdb47a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.637472 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-config-data" (OuterVolumeSpecName: "config-data") pod "db7bd1ed-eb83-4617-9479-732e3fdb47a8" (UID: "db7bd1ed-eb83-4617-9479-732e3fdb47a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.658186 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "db7bd1ed-eb83-4617-9479-732e3fdb47a8" (UID: "db7bd1ed-eb83-4617-9479-732e3fdb47a8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.669462 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "db7bd1ed-eb83-4617-9479-732e3fdb47a8" (UID: "db7bd1ed-eb83-4617-9479-732e3fdb47a8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.674826 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4bcd77d-19fd-4f69-8879-906569e3c709","Type":"ContainerStarted","Data":"95665ad48754c9f371cb4b02548f1f16c824432003e28cbeba7691932113283e"}
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.674878 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4bcd77d-19fd-4f69-8879-906569e3c709","Type":"ContainerStarted","Data":"c04bd9d2088c231a46a38aa80e0239b4ae6baecd7ca707d41489b2b2837fba12"}
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.674894 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4bcd77d-19fd-4f69-8879-906569e3c709","Type":"ContainerStarted","Data":"4161ec79b4d5c77864320ef2859fff2255bdec477e1de1cd044db8a6ed1af253"}
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.676756 4888 generic.go:334] "Generic (PLEG): container finished" podID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerID="fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a" exitCode=0
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.676994 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.677023 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db7bd1ed-eb83-4617-9479-732e3fdb47a8","Type":"ContainerDied","Data":"fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a"}
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.677166 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"db7bd1ed-eb83-4617-9479-732e3fdb47a8","Type":"ContainerDied","Data":"cb0943f60f5a4d684d5832a0424e2cc0a34ca97a2a249d8868d63b39f377b51d"}
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.677182 4888 scope.go:117] "RemoveContainer" containerID="fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a"
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.690430 4888 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7bd1ed-eb83-4617-9479-732e3fdb47a8-logs\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.690461 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.690471 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfbdl\" (UniqueName: \"kubernetes.io/projected/db7bd1ed-eb83-4617-9479-732e3fdb47a8-kube-api-access-jfbdl\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.690479 4888 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-public-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.690489 4888 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.690548 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7bd1ed-eb83-4617-9479-732e3fdb47a8-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.697370 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.697353525 podStartE2EDuration="2.697353525s" podCreationTimestamp="2025-10-06 15:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:21:33.696111926 +0000 UTC m=+1233.508462644" watchObservedRunningTime="2025-10-06 15:21:33.697353525 +0000 UTC m=+1233.509704243"
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.727737 4888 scope.go:117] "RemoveContainer" containerID="fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea"
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.739297 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.755498 4888 scope.go:117] "RemoveContainer" containerID="fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a"
Oct 06 15:21:33 crc kubenswrapper[4888]: E1006 15:21:33.756129 4888
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a\": container with ID starting with fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a not found: ID does not exist" containerID="fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.756168 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a"} err="failed to get container status \"fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a\": rpc error: code = NotFound desc = could not find container \"fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a\": container with ID starting with fb85df99f29c580bb51efb5011396d94aa93ab852bf0f9bd008f226b7bfc9a6a not found: ID does not exist" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.756196 4888 scope.go:117] "RemoveContainer" containerID="fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea" Oct 06 15:21:33 crc kubenswrapper[4888]: E1006 15:21:33.756561 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea\": container with ID starting with fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea not found: ID does not exist" containerID="fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.756591 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea"} err="failed to get container status \"fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea\": rpc error: code = NotFound desc = could not find container \"fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea\": container with ID starting with fa2761bc11d0e6fe9c3395d804a8b38c26f670221b4efe4ea09e4d68574e2aea not found: ID does not exist" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.756968 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.770122 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 15:21:33 crc kubenswrapper[4888]: E1006 15:21:33.770718 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerName="nova-api-api" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.770738 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerName="nova-api-api" Oct 06 15:21:33 crc kubenswrapper[4888]: E1006 15:21:33.770765 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerName="nova-api-log" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.770777 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerName="nova-api-log" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.771016 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerName="nova-api-api" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 
15:21:33.771038 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" containerName="nova-api-log" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.772210 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.776356 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.776697 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.776960 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.785123 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.792292 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.792347 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-config-data\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.792400 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7kn8\" (UniqueName: \"kubernetes.io/projected/e33f0083-ea10-494e-858d-94ba18279687-kube-api-access-k7kn8\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.792467 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e33f0083-ea10-494e-858d-94ba18279687-logs\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.792530 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-public-tls-certs\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.792573 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.894336 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 
15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.894413 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.894514 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-config-data\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.894565 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7kn8\" (UniqueName: \"kubernetes.io/projected/e33f0083-ea10-494e-858d-94ba18279687-kube-api-access-k7kn8\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.894635 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e33f0083-ea10-494e-858d-94ba18279687-logs\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.894690 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-public-tls-certs\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.895405 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e33f0083-ea10-494e-858d-94ba18279687-logs\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.897814 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-public-tls-certs\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.898625 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.899223 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.902774 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e33f0083-ea10-494e-858d-94ba18279687-config-data\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:33 crc kubenswrapper[4888]: I1006 15:21:33.917890 4888 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k7kn8\" (UniqueName: \"kubernetes.io/projected/e33f0083-ea10-494e-858d-94ba18279687-kube-api-access-k7kn8\") pod \"nova-api-0\" (UID: \"e33f0083-ea10-494e-858d-94ba18279687\") " pod="openstack/nova-api-0" Oct 06 15:21:34 crc kubenswrapper[4888]: I1006 15:21:34.100936 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 15:21:34 crc kubenswrapper[4888]: W1006 15:21:34.599720 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode33f0083_ea10_494e_858d_94ba18279687.slice/crio-12b0d7f6641758b48a4b3fd0cf3cf00b4dc165cb4b01b03ba78429e7ef2601a5 WatchSource:0}: Error finding container 12b0d7f6641758b48a4b3fd0cf3cf00b4dc165cb4b01b03ba78429e7ef2601a5: Status 404 returned error can't find the container with id 12b0d7f6641758b48a4b3fd0cf3cf00b4dc165cb4b01b03ba78429e7ef2601a5 Oct 06 15:21:34 crc kubenswrapper[4888]: I1006 15:21:34.600494 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 15:21:34 crc kubenswrapper[4888]: I1006 15:21:34.691943 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e33f0083-ea10-494e-858d-94ba18279687","Type":"ContainerStarted","Data":"12b0d7f6641758b48a4b3fd0cf3cf00b4dc165cb4b01b03ba78429e7ef2601a5"} Oct 06 15:21:34 crc kubenswrapper[4888]: I1006 15:21:34.939023 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7bd1ed-eb83-4617-9479-732e3fdb47a8" path="/var/lib/kubelet/pods/db7bd1ed-eb83-4617-9479-732e3fdb47a8/volumes" Oct 06 15:21:35 crc kubenswrapper[4888]: I1006 15:21:35.704953 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e33f0083-ea10-494e-858d-94ba18279687","Type":"ContainerStarted","Data":"a9a21cafaa1fc4d36aeda731461c2db28ccb27e5a429d9af0e9956b659f853d3"} Oct 06 15:21:35 crc kubenswrapper[4888]: I1006 15:21:35.705286 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e33f0083-ea10-494e-858d-94ba18279687","Type":"ContainerStarted","Data":"b076c4738833fa984b910ef3ced2f3934474fcfc920793b33a5bb28f9aa55458"} Oct 06 15:21:36 crc kubenswrapper[4888]: I1006 15:21:36.016673 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 06 15:21:37 crc kubenswrapper[4888]: I1006 15:21:37.327439 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 15:21:37 crc kubenswrapper[4888]: I1006 15:21:37.328915 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 15:21:41 crc kubenswrapper[4888]: I1006 15:21:41.017253 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 06 15:21:41 crc kubenswrapper[4888]: I1006 15:21:41.046241 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 15:21:41 crc kubenswrapper[4888]: I1006 15:21:41.068360 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=8.068339147 podStartE2EDuration="8.068339147s" podCreationTimestamp="2025-10-06 15:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:21:35.730071483 +0000 UTC 
m=+1235.542422211" watchObservedRunningTime="2025-10-06 15:21:41.068339147 +0000 UTC m=+1240.880689855" Oct 06 15:21:41 crc kubenswrapper[4888]: I1006 15:21:41.793458 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 06 15:21:42 crc kubenswrapper[4888]: I1006 15:21:42.328287 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 15:21:42 crc kubenswrapper[4888]: I1006 15:21:42.328347 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 15:21:43 crc kubenswrapper[4888]: I1006 15:21:43.345120 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c4bcd77d-19fd-4f69-8879-906569e3c709" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 15:21:43 crc kubenswrapper[4888]: I1006 15:21:43.345818 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c4bcd77d-19fd-4f69-8879-906569e3c709" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 15:21:44 crc kubenswrapper[4888]: I1006 15:21:44.103166 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 15:21:44 crc kubenswrapper[4888]: I1006 15:21:44.103241 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 15:21:45 crc kubenswrapper[4888]: I1006 15:21:45.115998 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e33f0083-ea10-494e-858d-94ba18279687" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 15:21:45 crc kubenswrapper[4888]: I1006 15:21:45.116239 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e33f0083-ea10-494e-858d-94ba18279687" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 15:21:47 crc kubenswrapper[4888]: I1006 15:21:47.920995 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 06 15:21:52 crc kubenswrapper[4888]: I1006 15:21:52.333209 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 15:21:52 crc kubenswrapper[4888]: I1006 15:21:52.335095 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 15:21:52 crc kubenswrapper[4888]: I1006 15:21:52.343053 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 15:21:52 crc kubenswrapper[4888]: I1006 15:21:52.869025 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 15:21:54 crc kubenswrapper[4888]: I1006 15:21:54.108698 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 15:21:54 crc kubenswrapper[4888]: I1006 15:21:54.109415 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-api-0" Oct 06 15:21:54 crc kubenswrapper[4888]: I1006 15:21:54.110780 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 15:21:54 crc kubenswrapper[4888]: I1006 15:21:54.120636 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 15:21:54 crc kubenswrapper[4888]: I1006 15:21:54.882584 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 15:21:54 crc kubenswrapper[4888]: I1006 15:21:54.890631 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 15:22:02 crc kubenswrapper[4888]: I1006 15:22:02.563608 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:22:02 crc kubenswrapper[4888]: I1006 15:22:02.564229 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:22:02 crc kubenswrapper[4888]: I1006 15:22:02.917833 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 15:22:04 crc kubenswrapper[4888]: I1006 15:22:04.090212 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 15:22:08 crc kubenswrapper[4888]: I1006 15:22:08.480853 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="91ed3909-71e7-40e7-9e97-e9917d621080" containerName="rabbitmq" containerID="cri-o://005d67fa0a200c095eb5b5921d7bcd67f0554d73647cd131d09c8afe13332788" gracePeriod=604795 Oct 06 15:22:08 crc kubenswrapper[4888]: I1006 15:22:08.828333 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f44ccc0c-19ed-4959-ac2c-46842cd27fc1" containerName="rabbitmq" containerID="cri-o://35e18882845af9a833ef7e3280ef021b3add404135d569827de8904efe2282ca" gracePeriod=604796 Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.058790 4888 generic.go:334] "Generic (PLEG): container finished" podID="91ed3909-71e7-40e7-9e97-e9917d621080" containerID="005d67fa0a200c095eb5b5921d7bcd67f0554d73647cd131d09c8afe13332788" exitCode=0 Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.059012 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ed3909-71e7-40e7-9e97-e9917d621080","Type":"ContainerDied","Data":"005d67fa0a200c095eb5b5921d7bcd67f0554d73647cd131d09c8afe13332788"} Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.059071 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ed3909-71e7-40e7-9e97-e9917d621080","Type":"ContainerDied","Data":"e22186ba0d6ff3d423cd0671c6928569f65cf704b0b4c540357b44984871aeb9"} Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.059088 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22186ba0d6ff3d423cd0671c6928569f65cf704b0b4c540357b44984871aeb9" Oct 06 15:22:15 crc 
kubenswrapper[4888]: I1006 15:22:15.061553 4888 generic.go:334] "Generic (PLEG): container finished" podID="f44ccc0c-19ed-4959-ac2c-46842cd27fc1" containerID="35e18882845af9a833ef7e3280ef021b3add404135d569827de8904efe2282ca" exitCode=0 Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.061589 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f44ccc0c-19ed-4959-ac2c-46842cd27fc1","Type":"ContainerDied","Data":"35e18882845af9a833ef7e3280ef021b3add404135d569827de8904efe2282ca"} Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.108585 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.190599 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ed3909-71e7-40e7-9e97-e9917d621080-pod-info\") pod \"91ed3909-71e7-40e7-9e97-e9917d621080\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.190662 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbd2x\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-kube-api-access-hbd2x\") pod \"91ed3909-71e7-40e7-9e97-e9917d621080\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.190714 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-tls\") pod \"91ed3909-71e7-40e7-9e97-e9917d621080\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.190757 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ed3909-71e7-40e7-9e97-e9917d621080-erlang-cookie-secret\") pod \"91ed3909-71e7-40e7-9e97-e9917d621080\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.190926 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"91ed3909-71e7-40e7-9e97-e9917d621080\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.191068 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-erlang-cookie\") pod \"91ed3909-71e7-40e7-9e97-e9917d621080\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.191116 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-plugins\") pod \"91ed3909-71e7-40e7-9e97-e9917d621080\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.191170 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-confd\") pod \"91ed3909-71e7-40e7-9e97-e9917d621080\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") " 
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.191194 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-server-conf\") pod \"91ed3909-71e7-40e7-9e97-e9917d621080\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.191227 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-plugins-conf\") pod \"91ed3909-71e7-40e7-9e97-e9917d621080\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.191249 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-config-data\") pod \"91ed3909-71e7-40e7-9e97-e9917d621080\" (UID: \"91ed3909-71e7-40e7-9e97-e9917d621080\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.191860 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "91ed3909-71e7-40e7-9e97-e9917d621080" (UID: "91ed3909-71e7-40e7-9e97-e9917d621080"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.191985 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "91ed3909-71e7-40e7-9e97-e9917d621080" (UID: "91ed3909-71e7-40e7-9e97-e9917d621080"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.192007 4888 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.192441 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "91ed3909-71e7-40e7-9e97-e9917d621080" (UID: "91ed3909-71e7-40e7-9e97-e9917d621080"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.202192 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/91ed3909-71e7-40e7-9e97-e9917d621080-pod-info" (OuterVolumeSpecName: "pod-info") pod "91ed3909-71e7-40e7-9e97-e9917d621080" (UID: "91ed3909-71e7-40e7-9e97-e9917d621080"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.202482 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "91ed3909-71e7-40e7-9e97-e9917d621080" (UID: "91ed3909-71e7-40e7-9e97-e9917d621080"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.211388 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ed3909-71e7-40e7-9e97-e9917d621080-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "91ed3909-71e7-40e7-9e97-e9917d621080" (UID: "91ed3909-71e7-40e7-9e97-e9917d621080"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.215325 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-kube-api-access-hbd2x" (OuterVolumeSpecName: "kube-api-access-hbd2x") pod "91ed3909-71e7-40e7-9e97-e9917d621080" (UID: "91ed3909-71e7-40e7-9e97-e9917d621080"). InnerVolumeSpecName "kube-api-access-hbd2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.215408 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "91ed3909-71e7-40e7-9e97-e9917d621080" (UID: "91ed3909-71e7-40e7-9e97-e9917d621080"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.274456 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-server-conf" (OuterVolumeSpecName: "server-conf") pod "91ed3909-71e7-40e7-9e97-e9917d621080" (UID: "91ed3909-71e7-40e7-9e97-e9917d621080"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.294456 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-config-data" (OuterVolumeSpecName: "config-data") pod "91ed3909-71e7-40e7-9e97-e9917d621080" (UID: "91ed3909-71e7-40e7-9e97-e9917d621080"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.295220 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbd2x\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-kube-api-access-hbd2x\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.295244 4888 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.295255 4888 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ed3909-71e7-40e7-9e97-e9917d621080-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.295297 4888 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.295309 4888 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.295317 4888 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-server-conf\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.295324 4888 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-plugins-conf\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.295339 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91ed3909-71e7-40e7-9e97-e9917d621080-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.295347 4888 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ed3909-71e7-40e7-9e97-e9917d621080-pod-info\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.316017 4888 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.394015 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396086 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-plugins-conf\") pod \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396149 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-server-conf\") pod \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396179 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-erlang-cookie-secret\") pod \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396203 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-config-data\") pod \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396235 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-tls\") pod \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396264 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-plugins\") pod \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396291 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396331 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-erlang-cookie\") pod \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396378 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-confd\") pod \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396415 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-pod-info\") pod \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396445 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc827\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-kube-api-access-xc827\") pod \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\" (UID: \"f44ccc0c-19ed-4959-ac2c-46842cd27fc1\") "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.396690 4888 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.403747 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f44ccc0c-19ed-4959-ac2c-46842cd27fc1" (UID: "f44ccc0c-19ed-4959-ac2c-46842cd27fc1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.407612 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f44ccc0c-19ed-4959-ac2c-46842cd27fc1" (UID: "f44ccc0c-19ed-4959-ac2c-46842cd27fc1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.407886 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-kube-api-access-xc827" (OuterVolumeSpecName: "kube-api-access-xc827") pod "f44ccc0c-19ed-4959-ac2c-46842cd27fc1" (UID: "f44ccc0c-19ed-4959-ac2c-46842cd27fc1"). InnerVolumeSpecName "kube-api-access-xc827". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.408751 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f44ccc0c-19ed-4959-ac2c-46842cd27fc1" (UID: "f44ccc0c-19ed-4959-ac2c-46842cd27fc1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.410296 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f44ccc0c-19ed-4959-ac2c-46842cd27fc1" (UID: "f44ccc0c-19ed-4959-ac2c-46842cd27fc1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.410770 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f44ccc0c-19ed-4959-ac2c-46842cd27fc1" (UID: "f44ccc0c-19ed-4959-ac2c-46842cd27fc1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.413940 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "91ed3909-71e7-40e7-9e97-e9917d621080" (UID: "91ed3909-71e7-40e7-9e97-e9917d621080"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.416889 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-pod-info" (OuterVolumeSpecName: "pod-info") pod "f44ccc0c-19ed-4959-ac2c-46842cd27fc1" (UID: "f44ccc0c-19ed-4959-ac2c-46842cd27fc1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.460013 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "f44ccc0c-19ed-4959-ac2c-46842cd27fc1" (UID: "f44ccc0c-19ed-4959-ac2c-46842cd27fc1"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.506141 4888 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.506940 4888 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.507026 4888 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.507112 4888 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.507178 4888 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.507253 4888 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ed3909-71e7-40e7-9e97-e9917d621080-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.507341 4888 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-pod-info\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.507432 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc827\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-kube-api-access-xc827\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.507521 4888 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-plugins-conf\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.510757 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-config-data" (OuterVolumeSpecName: "config-data") pod "f44ccc0c-19ed-4959-ac2c-46842cd27fc1" (UID: "f44ccc0c-19ed-4959-ac2c-46842cd27fc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.525404 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-server-conf" (OuterVolumeSpecName: "server-conf") pod "f44ccc0c-19ed-4959-ac2c-46842cd27fc1" (UID: "f44ccc0c-19ed-4959-ac2c-46842cd27fc1"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.562895 4888 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.610196 4888 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-server-conf\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.610247 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.610261 4888 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.611022 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f44ccc0c-19ed-4959-ac2c-46842cd27fc1" (UID: "f44ccc0c-19ed-4959-ac2c-46842cd27fc1"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:22:15 crc kubenswrapper[4888]: I1006 15:22:15.712123 4888 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f44ccc0c-19ed-4959-ac2c-46842cd27fc1-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.073329 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.073358 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.073336 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f44ccc0c-19ed-4959-ac2c-46842cd27fc1","Type":"ContainerDied","Data":"57cf5f2d5a2b44ef0ee059f8db3e6b4a80cb0908a99f484199bbdaed25471d64"}
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.074292 4888 scope.go:117] "RemoveContainer" containerID="35e18882845af9a833ef7e3280ef021b3add404135d569827de8904efe2282ca"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.124264 4888 scope.go:117] "RemoveContainer" containerID="bb989701414b929a31612dd68136f8326ca10cd6168097c538102b30298af31e"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.155336 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.185037 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.210306 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.256318 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.302857 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 06 15:22:16 crc kubenswrapper[4888]: E1006 15:22:16.303478 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ed3909-71e7-40e7-9e97-e9917d621080" containerName="setup-container"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.303492 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ed3909-71e7-40e7-9e97-e9917d621080" containerName="setup-container"
Oct 06 15:22:16 crc kubenswrapper[4888]: E1006 15:22:16.303510 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44ccc0c-19ed-4959-ac2c-46842cd27fc1" containerName="rabbitmq"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.303518 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44ccc0c-19ed-4959-ac2c-46842cd27fc1" containerName="rabbitmq"
Oct 06 15:22:16 crc kubenswrapper[4888]: E1006 15:22:16.303536 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44ccc0c-19ed-4959-ac2c-46842cd27fc1" containerName="setup-container"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.303542 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44ccc0c-19ed-4959-ac2c-46842cd27fc1" containerName="setup-container"
Oct 06 15:22:16 crc kubenswrapper[4888]: E1006 15:22:16.303552 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ed3909-71e7-40e7-9e97-e9917d621080" containerName="rabbitmq"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.303557 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ed3909-71e7-40e7-9e97-e9917d621080" containerName="rabbitmq"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.303751 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ed3909-71e7-40e7-9e97-e9917d621080" containerName="rabbitmq"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.303763 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44ccc0c-19ed-4959-ac2c-46842cd27fc1" containerName="rabbitmq"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.304718 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.312276 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.312699 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.312954 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.313239 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.313415 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.313601 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-b8dpc"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.314276 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.329267 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.331039 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.334652 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.334875 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.335816 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.336060 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.340183 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.340359 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.340527 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m5r5h"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.351188 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.402854 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.433131 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80df2079-12d2-4a47-837c-69d8f26209a2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.433448 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80df2079-12d2-4a47-837c-69d8f26209a2-config-data\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.433562 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80df2079-12d2-4a47-837c-69d8f26209a2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.433690 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.433786 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.433947 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7fxw\" (UniqueName: \"kubernetes.io/projected/80df2079-12d2-4a47-837c-69d8f26209a2-kube-api-access-g7fxw\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.434048 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.434223 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.434356 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80df2079-12d2-4a47-837c-69d8f26209a2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.434461 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.434539 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80df2079-12d2-4a47-837c-69d8f26209a2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.536199 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80df2079-12d2-4a47-837c-69d8f26209a2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.536455 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.536557 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.536682 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.536833 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7fxw\" (UniqueName: \"kubernetes.io/projected/80df2079-12d2-4a47-837c-69d8f26209a2-kube-api-access-g7fxw\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.536928 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.537016 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.537120 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.537216 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.537350 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.537515 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.537627 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.537728 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80df2079-12d2-4a47-837c-69d8f26209a2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.537857 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.537955 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80df2079-12d2-4a47-837c-69d8f26209a2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.538053 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.538169 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.538270 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80df2079-12d2-4a47-837c-69d8f26209a2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.538371 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.538450 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.538469 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.538610 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4pvl\" (UniqueName: \"kubernetes.io/projected/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-kube-api-access-j4pvl\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.538674 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80df2079-12d2-4a47-837c-69d8f26209a2-config-data\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.539163 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.540102 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80df2079-12d2-4a47-837c-69d8f26209a2-config-data\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.540418 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.540546 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/80df2079-12d2-4a47-837c-69d8f26209a2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.541288 4888 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/80df2079-12d2-4a47-837c-69d8f26209a2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.545848 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/80df2079-12d2-4a47-837c-69d8f26209a2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.546696 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/80df2079-12d2-4a47-837c-69d8f26209a2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.550131 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.551907 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/80df2079-12d2-4a47-837c-69d8f26209a2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.574688 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7fxw\" (UniqueName: \"kubernetes.io/projected/80df2079-12d2-4a47-837c-69d8f26209a2-kube-api-access-g7fxw\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.591864 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"80df2079-12d2-4a47-837c-69d8f26209a2\") " pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.640978 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.641236 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.641382 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.641485 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.641584 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4pvl\" (UniqueName: \"kubernetes.io/projected/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-kube-api-access-j4pvl\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.641711 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.641877 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.642006 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.642100 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.642273 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.642369 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.642637 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.643125 4888 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.643434 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.643592 4888 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.646968 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.647002 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.647310 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.648702 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.648861 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.649451 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.652317 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.667386 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4pvl\" (UniqueName: \"kubernetes.io/projected/a6bbff15-928d-43f2-8a4b-0c0ee40d73a5-kube-api-access-j4pvl\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.690540 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.933608 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ed3909-71e7-40e7-9e97-e9917d621080" path="/var/lib/kubelet/pods/91ed3909-71e7-40e7-9e97-e9917d621080/volumes" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.935251 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44ccc0c-19ed-4959-ac2c-46842cd27fc1" path="/var/lib/kubelet/pods/f44ccc0c-19ed-4959-ac2c-46842cd27fc1/volumes" Oct 06 15:22:16 crc kubenswrapper[4888]: I1006 15:22:16.977901 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.344093 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.396101 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.430928 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-wbgrj"] Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.439510 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.444956 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.459079 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-wbgrj"] Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.592369 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.592416 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.592518 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.592879 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-svc\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.592989 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.593026 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-config\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.593059 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gthwj\" (UniqueName: \"kubernetes.io/projected/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-kube-api-access-gthwj\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.695061 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.695398 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.695452 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.695513 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-svc\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.695541 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.695571 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-config\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.695601 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gthwj\" (UniqueName: \"kubernetes.io/projected/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-kube-api-access-gthwj\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.696114 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.696366 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.696453 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: 
\"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.696475 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.696483 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-config\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.696744 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-svc\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.716423 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gthwj\" (UniqueName: \"kubernetes.io/projected/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-kube-api-access-gthwj\") pod \"dnsmasq-dns-67b789f86c-wbgrj\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:17 crc kubenswrapper[4888]: I1006 15:22:17.777495 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:18 crc kubenswrapper[4888]: I1006 15:22:18.105201 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5","Type":"ContainerStarted","Data":"dec0c25ea7647da79c512c4ecb212eac6bc23cd09945e1ebc1d0b2f28d0f006e"} Oct 06 15:22:18 crc kubenswrapper[4888]: I1006 15:22:18.106397 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80df2079-12d2-4a47-837c-69d8f26209a2","Type":"ContainerStarted","Data":"00bdf03fddae46ca7c452dc122da3721015123f47ef7d76c7a8f81336c793f1f"} Oct 06 15:22:18 crc kubenswrapper[4888]: I1006 15:22:18.272114 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-wbgrj"] Oct 06 15:22:19 crc kubenswrapper[4888]: I1006 15:22:19.116873 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5","Type":"ContainerStarted","Data":"f3a282d8915a59693e542823a739258df90841f35eb18e2bd651d65303dc9732"} Oct 06 15:22:19 crc kubenswrapper[4888]: I1006 15:22:19.121488 4888 generic.go:334] "Generic (PLEG): container finished" podID="bd0a50ca-5301-46f9-b2f9-6a2a682973ff" containerID="f8209890692529f952fba5c380ff272558c7282049e2b38d17b8bba8dfc1b760" exitCode=0 Oct 06 15:22:19 crc kubenswrapper[4888]: I1006 15:22:19.121548 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" event={"ID":"bd0a50ca-5301-46f9-b2f9-6a2a682973ff","Type":"ContainerDied","Data":"f8209890692529f952fba5c380ff272558c7282049e2b38d17b8bba8dfc1b760"} Oct 06 15:22:19 crc kubenswrapper[4888]: I1006 15:22:19.121571 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" event={"ID":"bd0a50ca-5301-46f9-b2f9-6a2a682973ff","Type":"ContainerStarted","Data":"8428120c0522fada0aabd35f445a83f96034c1711c296869e1eab03af7c10b0b"} Oct 06 15:22:19 crc kubenswrapper[4888]: I1006 15:22:19.130999 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80df2079-12d2-4a47-837c-69d8f26209a2","Type":"ContainerStarted","Data":"ee970ee880c18b9d97904d555a26a69697f96527137cb6bf0131644728af2908"} Oct 06 15:22:20 crc kubenswrapper[4888]: I1006 15:22:20.141662 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" event={"ID":"bd0a50ca-5301-46f9-b2f9-6a2a682973ff","Type":"ContainerStarted","Data":"5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310"} Oct 06 15:22:20 crc kubenswrapper[4888]: I1006 15:22:20.165547 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" podStartSLOduration=3.165526089 podStartE2EDuration="3.165526089s" podCreationTimestamp="2025-10-06 15:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:22:20.159109277 +0000 UTC m=+1279.971459995" watchObservedRunningTime="2025-10-06 15:22:20.165526089 +0000 UTC m=+1279.977876807" Oct 06 15:22:20 crc kubenswrapper[4888]: I1006 15:22:20.258690 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f44ccc0c-19ed-4959-ac2c-46842cd27fc1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: i/o timeout" Oct 06 15:22:21 crc kubenswrapper[4888]: I1006 15:22:21.150228 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:27 crc kubenswrapper[4888]: I1006 15:22:27.780048 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:27 crc kubenswrapper[4888]: I1006 15:22:27.860997 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-9vj7k"] Oct 06 15:22:27 crc kubenswrapper[4888]: I1006 15:22:27.861236 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" podUID="d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" containerName="dnsmasq-dns" containerID="cri-o://916ddaaec0160c5db2506143914fec08b2c5a9cc60f1e0734b1666717ad31679" gracePeriod=10 Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.089582 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79dc84bdb7-w2hd2"] Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.091950 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.142501 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79dc84bdb7-w2hd2"] Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.196017 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-ovsdbserver-sb\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.196118 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-dns-swift-storage-0\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.196158 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-ovsdbserver-nb\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.196197 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-config\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.196259 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-openstack-edpm-ipam\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.196280 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-dns-svc\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.196327 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kr2w\" (UniqueName: \"kubernetes.io/projected/4b78ed62-f8e2-4e9a-8517-e1005c50b536-kube-api-access-8kr2w\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.224996 4888 generic.go:334] "Generic (PLEG): container finished" podID="d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" containerID="916ddaaec0160c5db2506143914fec08b2c5a9cc60f1e0734b1666717ad31679" exitCode=0 Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.225055 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" 
event={"ID":"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc","Type":"ContainerDied","Data":"916ddaaec0160c5db2506143914fec08b2c5a9cc60f1e0734b1666717ad31679"} Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.303404 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-ovsdbserver-nb\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.303499 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-config\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.303638 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-openstack-edpm-ipam\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.303673 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-dns-svc\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.304060 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kr2w\" (UniqueName: \"kubernetes.io/projected/4b78ed62-f8e2-4e9a-8517-e1005c50b536-kube-api-access-8kr2w\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.304174 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-ovsdbserver-sb\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.304235 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-dns-swift-storage-0\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.304717 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-ovsdbserver-nb\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.305099 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-config\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: 
\"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.305425 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-dns-svc\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.306098 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-ovsdbserver-sb\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.306345 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-openstack-edpm-ipam\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.306631 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b78ed62-f8e2-4e9a-8517-e1005c50b536-dns-swift-storage-0\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.338093 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kr2w\" (UniqueName: \"kubernetes.io/projected/4b78ed62-f8e2-4e9a-8517-e1005c50b536-kube-api-access-8kr2w\") pod \"dnsmasq-dns-79dc84bdb7-w2hd2\" (UID: \"4b78ed62-f8e2-4e9a-8517-e1005c50b536\") " pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.438917 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.644692 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.741200 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-nb\") pod \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.741607 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-config\") pod \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.741686 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpn7f\" (UniqueName: \"kubernetes.io/projected/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-kube-api-access-kpn7f\") pod \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.741748 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-sb\") pod \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.741834 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-svc\") pod \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.741880 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-swift-storage-0\") pod \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\" (UID: \"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc\") " Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.752104 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-kube-api-access-kpn7f" (OuterVolumeSpecName: "kube-api-access-kpn7f") pod "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" (UID: "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc"). InnerVolumeSpecName "kube-api-access-kpn7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.848197 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpn7f\" (UniqueName: \"kubernetes.io/projected/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-kube-api-access-kpn7f\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.862175 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" (UID: "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.869411 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" (UID: "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.886835 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" (UID: "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.914460 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" (UID: "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.924288 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-config" (OuterVolumeSpecName: "config") pod "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" (UID: "d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.950995 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.951097 4888 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.951113 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.951125 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:28 crc kubenswrapper[4888]: I1006 15:22:28.951137 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:29 crc kubenswrapper[4888]: I1006 15:22:29.111631 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79dc84bdb7-w2hd2"] Oct 06 15:22:29 crc kubenswrapper[4888]: W1006 15:22:29.151472 4888 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b78ed62_f8e2_4e9a_8517_e1005c50b536.slice/crio-3dac0693ca4d97af1f4bad4ac5c71de37cea0d4e7ee9323673cfd197f9e34e7c WatchSource:0}: Error finding container 3dac0693ca4d97af1f4bad4ac5c71de37cea0d4e7ee9323673cfd197f9e34e7c: Status 404 returned error can't find the container with id 3dac0693ca4d97af1f4bad4ac5c71de37cea0d4e7ee9323673cfd197f9e34e7c Oct 06 15:22:29 crc kubenswrapper[4888]: I1006 15:22:29.247079 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" event={"ID":"4b78ed62-f8e2-4e9a-8517-e1005c50b536","Type":"ContainerStarted","Data":"3dac0693ca4d97af1f4bad4ac5c71de37cea0d4e7ee9323673cfd197f9e34e7c"} Oct 06 15:22:29 crc kubenswrapper[4888]: I1006 15:22:29.250487 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" event={"ID":"d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc","Type":"ContainerDied","Data":"6ab95e6cedad6302cf1d3df9e3095b986837be719073bc90c3cba68d8a1171ce"} Oct 06 15:22:29 crc kubenswrapper[4888]: I1006 15:22:29.251299 4888 scope.go:117] "RemoveContainer" containerID="916ddaaec0160c5db2506143914fec08b2c5a9cc60f1e0734b1666717ad31679" Oct 06 15:22:29 crc kubenswrapper[4888]: I1006 15:22:29.251661 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-9vj7k" Oct 06 15:22:29 crc kubenswrapper[4888]: I1006 15:22:29.294948 4888 scope.go:117] "RemoveContainer" containerID="a8ffb63741cd0ec347b6f83d7ea1125802e8c685c501747fb8567685a0d361ec" Oct 06 15:22:29 crc kubenswrapper[4888]: I1006 15:22:29.311864 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-9vj7k"] Oct 06 15:22:29 crc kubenswrapper[4888]: I1006 15:22:29.336513 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-9vj7k"] Oct 06 15:22:30 crc kubenswrapper[4888]: I1006 15:22:30.261064 4888 generic.go:334] "Generic (PLEG): container finished" podID="4b78ed62-f8e2-4e9a-8517-e1005c50b536" containerID="759449c055b1091b899c2341b9d1c51c7c4c38bf6bbea6fd8253fd76b4fb8a1b" exitCode=0 Oct 06 15:22:30 crc kubenswrapper[4888]: I1006 15:22:30.261390 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" event={"ID":"4b78ed62-f8e2-4e9a-8517-e1005c50b536","Type":"ContainerDied","Data":"759449c055b1091b899c2341b9d1c51c7c4c38bf6bbea6fd8253fd76b4fb8a1b"} Oct 06 15:22:30 crc kubenswrapper[4888]: I1006 15:22:30.935053 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" path="/var/lib/kubelet/pods/d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc/volumes" Oct 06 15:22:31 crc kubenswrapper[4888]: I1006 15:22:31.280834 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" event={"ID":"4b78ed62-f8e2-4e9a-8517-e1005c50b536","Type":"ContainerStarted","Data":"668ee3ac7e1b4dab52a0a46223589b3d4d4336dba446725256602b34eff38855"} Oct 06 15:22:31 crc kubenswrapper[4888]: I1006 15:22:31.318662 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" podStartSLOduration=4.318643292 podStartE2EDuration="4.318643292s" podCreationTimestamp="2025-10-06 15:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:22:31.314007846 +0000 UTC m=+1291.126358564" 
watchObservedRunningTime="2025-10-06 15:22:31.318643292 +0000 UTC m=+1291.130994000" Oct 06 15:22:32 crc kubenswrapper[4888]: I1006 15:22:32.289916 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:32 crc kubenswrapper[4888]: I1006 15:22:32.564471 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:22:32 crc kubenswrapper[4888]: I1006 15:22:32.564537 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:22:38 crc kubenswrapper[4888]: I1006 15:22:38.439993 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79dc84bdb7-w2hd2" Oct 06 15:22:38 crc kubenswrapper[4888]: I1006 15:22:38.562707 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-wbgrj"] Oct 06 15:22:38 crc kubenswrapper[4888]: I1006 15:22:38.564430 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" podUID="bd0a50ca-5301-46f9-b2f9-6a2a682973ff" containerName="dnsmasq-dns" containerID="cri-o://5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310" gracePeriod=10 Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.096059 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.184610 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-config\") pod \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.184683 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gthwj\" (UniqueName: \"kubernetes.io/projected/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-kube-api-access-gthwj\") pod \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.184785 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-sb\") pod \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.185448 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-nb\") pod \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.185618 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-openstack-edpm-ipam\") pod \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.185716 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-svc\") pod \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.185760 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-swift-storage-0\") pod \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\" (UID: \"bd0a50ca-5301-46f9-b2f9-6a2a682973ff\") " Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.192486 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-kube-api-access-gthwj" (OuterVolumeSpecName: "kube-api-access-gthwj") pod "bd0a50ca-5301-46f9-b2f9-6a2a682973ff" (UID: "bd0a50ca-5301-46f9-b2f9-6a2a682973ff"). InnerVolumeSpecName "kube-api-access-gthwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.249096 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bd0a50ca-5301-46f9-b2f9-6a2a682973ff" (UID: "bd0a50ca-5301-46f9-b2f9-6a2a682973ff"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.252028 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd0a50ca-5301-46f9-b2f9-6a2a682973ff" (UID: "bd0a50ca-5301-46f9-b2f9-6a2a682973ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.259906 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd0a50ca-5301-46f9-b2f9-6a2a682973ff" (UID: "bd0a50ca-5301-46f9-b2f9-6a2a682973ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.271861 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd0a50ca-5301-46f9-b2f9-6a2a682973ff" (UID: "bd0a50ca-5301-46f9-b2f9-6a2a682973ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.277932 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-config" (OuterVolumeSpecName: "config") pod "bd0a50ca-5301-46f9-b2f9-6a2a682973ff" (UID: "bd0a50ca-5301-46f9-b2f9-6a2a682973ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.279649 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "bd0a50ca-5301-46f9-b2f9-6a2a682973ff" (UID: "bd0a50ca-5301-46f9-b2f9-6a2a682973ff"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.288772 4888 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.288884 4888 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.288910 4888 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-config\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.288924 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gthwj\" (UniqueName: \"kubernetes.io/projected/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-kube-api-access-gthwj\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.288935 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.288952 4888 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.288997 4888 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bd0a50ca-5301-46f9-b2f9-6a2a682973ff-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.372254 4888 generic.go:334] "Generic (PLEG): container finished" podID="bd0a50ca-5301-46f9-b2f9-6a2a682973ff" containerID="5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310" exitCode=0 Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.372309 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" event={"ID":"bd0a50ca-5301-46f9-b2f9-6a2a682973ff","Type":"ContainerDied","Data":"5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310"} Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.372356 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" event={"ID":"bd0a50ca-5301-46f9-b2f9-6a2a682973ff","Type":"ContainerDied","Data":"8428120c0522fada0aabd35f445a83f96034c1711c296869e1eab03af7c10b0b"} Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.372378 4888 scope.go:117] "RemoveContainer" containerID="5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.372806 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-wbgrj" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.411816 4888 scope.go:117] "RemoveContainer" containerID="f8209890692529f952fba5c380ff272558c7282049e2b38d17b8bba8dfc1b760" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.416850 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-wbgrj"] Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.426428 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-wbgrj"] Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.438134 4888 scope.go:117] "RemoveContainer" containerID="5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310" Oct 06 15:22:39 crc kubenswrapper[4888]: E1006 15:22:39.438680 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310\": container with ID starting with 5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310 not found: ID does not exist" containerID="5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.438735 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310"} err="failed to get container status \"5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310\": rpc error: code = NotFound desc = could not find container \"5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310\": container with ID starting with 5a5a50d8e61e8c66fb32b4e5570e60f5b8f9f253e2547d511c7987ffdf64e310 not found: ID does not exist" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.438764 4888 scope.go:117] "RemoveContainer" containerID="f8209890692529f952fba5c380ff272558c7282049e2b38d17b8bba8dfc1b760" Oct 06 15:22:39 crc kubenswrapper[4888]: E1006 15:22:39.439328 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8209890692529f952fba5c380ff272558c7282049e2b38d17b8bba8dfc1b760\": container with ID starting with f8209890692529f952fba5c380ff272558c7282049e2b38d17b8bba8dfc1b760 not found: ID does not exist" containerID="f8209890692529f952fba5c380ff272558c7282049e2b38d17b8bba8dfc1b760" Oct 06 15:22:39 crc kubenswrapper[4888]: I1006 15:22:39.439367 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8209890692529f952fba5c380ff272558c7282049e2b38d17b8bba8dfc1b760"} err="failed to get container status \"f8209890692529f952fba5c380ff272558c7282049e2b38d17b8bba8dfc1b760\": rpc error: code = NotFound desc = could not find container \"f8209890692529f952fba5c380ff272558c7282049e2b38d17b8bba8dfc1b760\": container with ID starting with f8209890692529f952fba5c380ff272558c7282049e2b38d17b8bba8dfc1b760 not found: ID does not exist" Oct 06 15:22:40 crc kubenswrapper[4888]: I1006 15:22:40.932941 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd0a50ca-5301-46f9-b2f9-6a2a682973ff" path="/var/lib/kubelet/pods/bd0a50ca-5301-46f9-b2f9-6a2a682973ff/volumes" Oct 06 15:22:51 crc kubenswrapper[4888]: I1006 15:22:51.482825 4888 generic.go:334] "Generic (PLEG): container finished" podID="a6bbff15-928d-43f2-8a4b-0c0ee40d73a5" containerID="f3a282d8915a59693e542823a739258df90841f35eb18e2bd651d65303dc9732" 
exitCode=0 Oct 06 15:22:51 crc kubenswrapper[4888]: I1006 15:22:51.483079 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5","Type":"ContainerDied","Data":"f3a282d8915a59693e542823a739258df90841f35eb18e2bd651d65303dc9732"} Oct 06 15:22:51 crc kubenswrapper[4888]: I1006 15:22:51.489013 4888 generic.go:334] "Generic (PLEG): container finished" podID="80df2079-12d2-4a47-837c-69d8f26209a2" containerID="ee970ee880c18b9d97904d555a26a69697f96527137cb6bf0131644728af2908" exitCode=0 Oct 06 15:22:51 crc kubenswrapper[4888]: I1006 15:22:51.489067 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80df2079-12d2-4a47-837c-69d8f26209a2","Type":"ContainerDied","Data":"ee970ee880c18b9d97904d555a26a69697f96527137cb6bf0131644728af2908"} Oct 06 15:22:52 crc kubenswrapper[4888]: I1006 15:22:52.505584 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"80df2079-12d2-4a47-837c-69d8f26209a2","Type":"ContainerStarted","Data":"7fdc2214bea895caf82a517bf4696e11b054b920d2e236bd9c457ac9178c01ba"} Oct 06 15:22:52 crc kubenswrapper[4888]: I1006 15:22:52.507179 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 06 15:22:52 crc kubenswrapper[4888]: I1006 15:22:52.511885 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a6bbff15-928d-43f2-8a4b-0c0ee40d73a5","Type":"ContainerStarted","Data":"89735920b412576cc46951ab2f740d2051dc51cd2cae76d8e641a3313a88791e"} Oct 06 15:22:52 crc kubenswrapper[4888]: I1006 15:22:52.512662 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:22:52 crc kubenswrapper[4888]: I1006 15:22:52.530080 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.530060326 podStartE2EDuration="36.530060326s" podCreationTimestamp="2025-10-06 15:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:22:52.528858418 +0000 UTC m=+1312.341209146" watchObservedRunningTime="2025-10-06 15:22:52.530060326 +0000 UTC m=+1312.342411044" Oct 06 15:22:52 crc kubenswrapper[4888]: I1006 15:22:52.564679 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.564660223 podStartE2EDuration="36.564660223s" podCreationTimestamp="2025-10-06 15:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:22:52.558014518 +0000 UTC m=+1312.370365226" watchObservedRunningTime="2025-10-06 15:22:52.564660223 +0000 UTC m=+1312.377010941" Oct 06 15:23:02 crc kubenswrapper[4888]: I1006 15:23:02.563623 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:23:02 crc kubenswrapper[4888]: I1006 15:23:02.564377 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:23:02 crc kubenswrapper[4888]: I1006 15:23:02.564423 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:23:02 crc kubenswrapper[4888]: I1006 15:23:02.565176 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"899ff317fa3e35b62f992c01acf0a8a3a07cd1a84a2e8f800bd40f108362572b"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 15:23:02 crc kubenswrapper[4888]: I1006 15:23:02.565234 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://899ff317fa3e35b62f992c01acf0a8a3a07cd1a84a2e8f800bd40f108362572b" gracePeriod=600 Oct 06 15:23:03 crc kubenswrapper[4888]: I1006 15:23:03.609198 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="899ff317fa3e35b62f992c01acf0a8a3a07cd1a84a2e8f800bd40f108362572b" exitCode=0 Oct 06 15:23:03 crc kubenswrapper[4888]: I1006 15:23:03.609264 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"899ff317fa3e35b62f992c01acf0a8a3a07cd1a84a2e8f800bd40f108362572b"} Oct 06 15:23:03 crc kubenswrapper[4888]: I1006 15:23:03.609691 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729"} Oct 06 15:23:03 crc kubenswrapper[4888]: I1006 15:23:03.609714 4888 scope.go:117] "RemoveContainer" containerID="2bf18ef6eff916382fcaa294f56d74e00f198381baa7886ed31f9974dc677b14" Oct 06 15:23:06 crc kubenswrapper[4888]: I1006 15:23:06.656229 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 06 15:23:06 crc kubenswrapper[4888]: I1006 15:23:06.982011 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 06 15:23:10 crc kubenswrapper[4888]: I1006 15:23:10.456384 4888 scope.go:117] "RemoveContainer" containerID="b5a72bf36651f8d766603c0c96d06ec8573e06712ce58396c5afbfef3f771a97" Oct 06 15:23:10 crc kubenswrapper[4888]: I1006 15:23:10.483049 4888 scope.go:117] "RemoveContainer" containerID="005d67fa0a200c095eb5b5921d7bcd67f0554d73647cd131d09c8afe13332788" Oct 06 15:23:10 crc kubenswrapper[4888]: I1006 15:23:10.529395 4888 scope.go:117] "RemoveContainer" containerID="fe2f9dc8b1e5c7fe90cafaba595ece0e1647c9612250257839bcf166ad4860ea" Oct 06 15:23:10 crc kubenswrapper[4888]: I1006 15:23:10.563119 4888 scope.go:117] "RemoveContainer" containerID="eaf1a839f33ce516252f305ec59d0e594303dc44149838e76a5ea3142ed60c6a" Oct 06 15:23:10 crc kubenswrapper[4888]: I1006 15:23:10.589635 4888 scope.go:117] "RemoveContainer" containerID="49de2a23c98aaa27672e46707c5335d1407f833c9febb56ecc031faf137e3b69" Oct 06 
Oct 06 15:24:10 crc kubenswrapper[4888]: I1006 15:24:10.812046 4888 scope.go:117] "RemoveContainer" containerID="3cfed561717b9a8b3c32b4bb051f5ecf3a5573d04bed4e8747ed7f2521bf7419"
Oct 06 15:24:10 crc kubenswrapper[4888]: I1006 15:24:10.848627 4888 scope.go:117] "RemoveContainer" containerID="2bddd7936dfe41981b0ae29c6b9e18894ab0b30b7ff9ce4dcaf00011c115e633"
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.895574 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l4ccw"]
Oct 06 15:24:54 crc kubenswrapper[4888]: E1006 15:24:54.896619 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" containerName="init"
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.896639 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" containerName="init"
Oct 06 15:24:54 crc kubenswrapper[4888]: E1006 15:24:54.896658 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" containerName="dnsmasq-dns"
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.896666 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" containerName="dnsmasq-dns"
Oct 06 15:24:54 crc kubenswrapper[4888]: E1006 15:24:54.896702 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0a50ca-5301-46f9-b2f9-6a2a682973ff" containerName="init"
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.896710 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0a50ca-5301-46f9-b2f9-6a2a682973ff" containerName="init"
Oct 06 15:24:54 crc kubenswrapper[4888]: E1006 15:24:54.896719 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0a50ca-5301-46f9-b2f9-6a2a682973ff" containerName="dnsmasq-dns"
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.896740 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0a50ca-5301-46f9-b2f9-6a2a682973ff" containerName="dnsmasq-dns"
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.896960 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd0a50ca-5301-46f9-b2f9-6a2a682973ff" containerName="dnsmasq-dns"
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.896974 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5fb1eb6-ac0e-4df5-a606-ba5fc10df5dc" containerName="dnsmasq-dns"
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.898258 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.909298 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l4ccw"]
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.998656 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-catalog-content\") pod \"certified-operators-l4ccw\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.998739 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dddc\" (UniqueName: \"kubernetes.io/projected/41475ef0-f0aa-4cfc-8413-1863df4cbec9-kube-api-access-8dddc\") pod \"certified-operators-l4ccw\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:24:54 crc kubenswrapper[4888]: I1006 15:24:54.998771 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-utilities\") pod \"certified-operators-l4ccw\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:24:55 crc kubenswrapper[4888]: I1006 15:24:55.100757 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-catalog-content\") pod \"certified-operators-l4ccw\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:24:55 crc kubenswrapper[4888]: I1006 15:24:55.100969 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dddc\" (UniqueName: \"kubernetes.io/projected/41475ef0-f0aa-4cfc-8413-1863df4cbec9-kube-api-access-8dddc\") pod \"certified-operators-l4ccw\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:24:55 crc kubenswrapper[4888]: I1006 15:24:55.101004 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-utilities\") pod \"certified-operators-l4ccw\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:24:55 crc kubenswrapper[4888]: I1006 15:24:55.101619 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-utilities\") pod \"certified-operators-l4ccw\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:24:55 crc kubenswrapper[4888]: I1006 15:24:55.101737 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-catalog-content\") pod \"certified-operators-l4ccw\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:24:55 crc kubenswrapper[4888]: I1006 15:24:55.126080 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dddc\" (UniqueName: \"kubernetes.io/projected/41475ef0-f0aa-4cfc-8413-1863df4cbec9-kube-api-access-8dddc\") pod \"certified-operators-l4ccw\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:24:55 crc kubenswrapper[4888]: I1006 15:24:55.222771 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:24:56 crc kubenswrapper[4888]: I1006 15:24:56.198155 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l4ccw"]
Oct 06 15:24:56 crc kubenswrapper[4888]: I1006 15:24:56.651115 4888 generic.go:334] "Generic (PLEG): container finished" podID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerID="4921eedece9503fae32eab6e0a7cdffe23358fde2192e5fd75632a5ba8e1d253" exitCode=0
Oct 06 15:24:56 crc kubenswrapper[4888]: I1006 15:24:56.651291 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4ccw" event={"ID":"41475ef0-f0aa-4cfc-8413-1863df4cbec9","Type":"ContainerDied","Data":"4921eedece9503fae32eab6e0a7cdffe23358fde2192e5fd75632a5ba8e1d253"}
Oct 06 15:24:56 crc kubenswrapper[4888]: I1006 15:24:56.651497 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4ccw" event={"ID":"41475ef0-f0aa-4cfc-8413-1863df4cbec9","Type":"ContainerStarted","Data":"0819d246b1268774554cdd18ce8982a6c1c96a743549bc5938cc9c952211746a"}
Oct 06 15:24:58 crc kubenswrapper[4888]: I1006 15:24:58.670263 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4ccw" event={"ID":"41475ef0-f0aa-4cfc-8413-1863df4cbec9","Type":"ContainerStarted","Data":"d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb"}
Oct 06 15:24:59 crc kubenswrapper[4888]: I1006 15:24:59.680699 4888 generic.go:334] "Generic (PLEG): container finished" podID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerID="d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb" exitCode=0
Oct 06 15:24:59 crc kubenswrapper[4888]: I1006 15:24:59.681015 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4ccw" event={"ID":"41475ef0-f0aa-4cfc-8413-1863df4cbec9","Type":"ContainerDied","Data":"d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb"}
Oct 06 15:25:00 crc kubenswrapper[4888]: I1006 15:25:00.692960 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4ccw" event={"ID":"41475ef0-f0aa-4cfc-8413-1863df4cbec9","Type":"ContainerStarted","Data":"dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39"}
Oct 06 15:25:00 crc kubenswrapper[4888]: I1006 15:25:00.708946 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l4ccw" podStartSLOduration=3.304629031 podStartE2EDuration="6.708894696s" podCreationTimestamp="2025-10-06 15:24:54 +0000 UTC" firstStartedPulling="2025-10-06 15:24:56.653158599 +0000 UTC m=+1436.465509307" lastFinishedPulling="2025-10-06 15:25:00.057424254 +0000 UTC m=+1439.869774972" observedRunningTime="2025-10-06 15:25:00.707483689 +0000 UTC m=+1440.519834407" watchObservedRunningTime="2025-10-06 15:25:00.708894696 +0000 UTC m=+1440.521245414"
Oct 06 15:25:02 crc kubenswrapper[4888]: I1006 15:25:02.563261 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 15:25:02 crc kubenswrapper[4888]: I1006 15:25:02.563616 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 15:25:05 crc kubenswrapper[4888]: I1006 15:25:05.224590 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:25:05 crc kubenswrapper[4888]: I1006 15:25:05.224947 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l4ccw"
Oct 06 15:25:06 crc kubenswrapper[4888]: I1006 15:25:06.275823 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-l4ccw" podUID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerName="registry-server" probeResult="failure" output=<
Oct 06 15:25:06 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s
Oct 06 15:25:06 crc kubenswrapper[4888]: >
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.479966 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ql4l7"]
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.482748 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ql4l7"
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.502197 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ql4l7"]
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.572312 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-utilities\") pod \"community-operators-ql4l7\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " pod="openshift-marketplace/community-operators-ql4l7"
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.572463 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-catalog-content\") pod \"community-operators-ql4l7\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " pod="openshift-marketplace/community-operators-ql4l7"
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.572508 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjbpl\" (UniqueName: \"kubernetes.io/projected/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-kube-api-access-bjbpl\") pod \"community-operators-ql4l7\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " pod="openshift-marketplace/community-operators-ql4l7"
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.673996 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjbpl\" (UniqueName: \"kubernetes.io/projected/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-kube-api-access-bjbpl\") pod \"community-operators-ql4l7\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " pod="openshift-marketplace/community-operators-ql4l7"
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.674178 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-utilities\") pod \"community-operators-ql4l7\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " pod="openshift-marketplace/community-operators-ql4l7"
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.674347 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-catalog-content\") pod \"community-operators-ql4l7\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " pod="openshift-marketplace/community-operators-ql4l7"
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.675001 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-utilities\") pod \"community-operators-ql4l7\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " pod="openshift-marketplace/community-operators-ql4l7"
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.675015 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-catalog-content\") pod \"community-operators-ql4l7\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " pod="openshift-marketplace/community-operators-ql4l7"
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.720583 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjbpl\" (UniqueName: \"kubernetes.io/projected/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-kube-api-access-bjbpl\") pod \"community-operators-ql4l7\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " pod="openshift-marketplace/community-operators-ql4l7"
Oct 06 15:25:09 crc kubenswrapper[4888]: I1006 15:25:09.814114 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ql4l7"
Oct 06 15:25:10 crc kubenswrapper[4888]: I1006 15:25:10.417973 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ql4l7"]
Oct 06 15:25:10 crc kubenswrapper[4888]: I1006 15:25:10.774534 4888 generic.go:334] "Generic (PLEG): container finished" podID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" containerID="6999bf252914690fd1268db52d06c91198c4ed8397aa5ad9c98e059fc9643d4d" exitCode=0
Oct 06 15:25:10 crc kubenswrapper[4888]: I1006 15:25:10.774616 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql4l7" event={"ID":"7bdd9951-89ed-4eff-9ce0-11fc98c612cb","Type":"ContainerDied","Data":"6999bf252914690fd1268db52d06c91198c4ed8397aa5ad9c98e059fc9643d4d"}
Oct 06 15:25:10 crc kubenswrapper[4888]: I1006 15:25:10.775104 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql4l7" event={"ID":"7bdd9951-89ed-4eff-9ce0-11fc98c612cb","Type":"ContainerStarted","Data":"7cd71a64889a9f9de4378950901bb7982516e9215e16f8f5f35a7d1a66658b99"}
Oct 06 15:25:11 crc kubenswrapper[4888]: I1006 15:25:11.785170 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql4l7" event={"ID":"7bdd9951-89ed-4eff-9ce0-11fc98c612cb","Type":"ContainerStarted","Data":"0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3"}
Oct 06 15:25:13 crc kubenswrapper[4888]: E1006 15:25:13.103392 4888 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bdd9951_89ed_4eff_9ce0_11fc98c612cb.slice/crio-0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3.scope\": RecentStats: unable to find data in memory cache]"
Oct 06 15:25:13 crc kubenswrapper[4888]: I1006 15:25:13.804806 4888 generic.go:334] "Generic (PLEG): container finished" podID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" containerID="0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3" exitCode=0
Oct 06 15:25:13 crc kubenswrapper[4888]: I1006 15:25:13.804844 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql4l7" event={"ID":"7bdd9951-89ed-4eff-9ce0-11fc98c612cb","Type":"ContainerDied","Data":"0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3"}
Oct 06 15:25:14 crc kubenswrapper[4888]: I1006 15:25:14.818776 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql4l7" event={"ID":"7bdd9951-89ed-4eff-9ce0-11fc98c612cb","Type":"ContainerStarted","Data":"b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5"}
Oct 06 15:25:14 crc kubenswrapper[4888]: I1006 15:25:14.841000 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ql4l7" podStartSLOduration=2.432029992 podStartE2EDuration="5.840978813s" podCreationTimestamp="2025-10-06 15:25:09 +0000 UTC" firstStartedPulling="2025-10-06 15:25:10.776758272 +0000 UTC m=+1450.589108991" lastFinishedPulling="2025-10-06 15:25:14.185707094 +0000 UTC m=+1453.998057812" observedRunningTime="2025-10-06 15:25:14.836500343 +0000 UTC m=+1454.648851071" watchObservedRunningTime="2025-10-06 15:25:14.840978813 +0000 UTC m=+1454.653329531"
Oct 06 15:25:15 crc kubenswrapper[4888]: I1006 15:25:15.280431 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l4ccw"
status="started" pod="openshift-marketplace/certified-operators-l4ccw" Oct 06 15:25:15 crc kubenswrapper[4888]: I1006 15:25:15.336524 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l4ccw" Oct 06 15:25:16 crc kubenswrapper[4888]: I1006 15:25:16.867476 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l4ccw"] Oct 06 15:25:16 crc kubenswrapper[4888]: I1006 15:25:16.868391 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l4ccw" podUID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerName="registry-server" containerID="cri-o://dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39" gracePeriod=2 Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.469368 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l4ccw" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.647748 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dddc\" (UniqueName: \"kubernetes.io/projected/41475ef0-f0aa-4cfc-8413-1863df4cbec9-kube-api-access-8dddc\") pod \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.647958 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-utilities\") pod \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.647992 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-catalog-content\") pod \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\" (UID: \"41475ef0-f0aa-4cfc-8413-1863df4cbec9\") " Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.648722 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-utilities" (OuterVolumeSpecName: "utilities") pod "41475ef0-f0aa-4cfc-8413-1863df4cbec9" (UID: "41475ef0-f0aa-4cfc-8413-1863df4cbec9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.654716 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41475ef0-f0aa-4cfc-8413-1863df4cbec9-kube-api-access-8dddc" (OuterVolumeSpecName: "kube-api-access-8dddc") pod "41475ef0-f0aa-4cfc-8413-1863df4cbec9" (UID: "41475ef0-f0aa-4cfc-8413-1863df4cbec9"). InnerVolumeSpecName "kube-api-access-8dddc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.693338 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41475ef0-f0aa-4cfc-8413-1863df4cbec9" (UID: "41475ef0-f0aa-4cfc-8413-1863df4cbec9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.749828 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.749875 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41475ef0-f0aa-4cfc-8413-1863df4cbec9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.749890 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dddc\" (UniqueName: \"kubernetes.io/projected/41475ef0-f0aa-4cfc-8413-1863df4cbec9-kube-api-access-8dddc\") on node \"crc\" DevicePath \"\"" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.863919 4888 generic.go:334] "Generic (PLEG): container finished" podID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerID="dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39" exitCode=0 Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.863960 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4ccw" event={"ID":"41475ef0-f0aa-4cfc-8413-1863df4cbec9","Type":"ContainerDied","Data":"dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39"} Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.863990 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l4ccw" event={"ID":"41475ef0-f0aa-4cfc-8413-1863df4cbec9","Type":"ContainerDied","Data":"0819d246b1268774554cdd18ce8982a6c1c96a743549bc5938cc9c952211746a"} Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.864010 4888 scope.go:117] "RemoveContainer" containerID="dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.864022 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l4ccw" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.885664 4888 scope.go:117] "RemoveContainer" containerID="d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.905161 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l4ccw"] Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.914719 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l4ccw"] Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.923101 4888 scope.go:117] "RemoveContainer" containerID="4921eedece9503fae32eab6e0a7cdffe23358fde2192e5fd75632a5ba8e1d253" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.962771 4888 scope.go:117] "RemoveContainer" containerID="dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39" Oct 06 15:25:17 crc kubenswrapper[4888]: E1006 15:25:17.963676 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39\": container with ID starting with dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39 not found: ID does not exist" containerID="dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.963718 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39"} err="failed to get container status \"dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39\": rpc error: code = NotFound desc = could not find container \"dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39\": container with ID starting with dc4eb2fe744d80feefa8d4c799bae96cdcf02b340aad4a9020d36b8db7edad39 not found: ID does not exist" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.963744 4888 scope.go:117] "RemoveContainer" containerID="d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb" Oct 06 15:25:17 crc kubenswrapper[4888]: E1006 15:25:17.964057 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb\": container with ID starting with d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb not found: ID does not exist" containerID="d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.964087 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb"} err="failed to get container status \"d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb\": rpc error: code = NotFound desc = could not find container \"d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb\": container with ID starting with d029d369ffc704b43ecdb607cdad3000f9dd8ab1b207a11a9e1f468cf0f2cefb not found: ID does not exist" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.964103 4888 scope.go:117] "RemoveContainer" containerID="4921eedece9503fae32eab6e0a7cdffe23358fde2192e5fd75632a5ba8e1d253" Oct 06 15:25:17 crc kubenswrapper[4888]: E1006 15:25:17.964286 4888 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4921eedece9503fae32eab6e0a7cdffe23358fde2192e5fd75632a5ba8e1d253\": container with ID starting with 4921eedece9503fae32eab6e0a7cdffe23358fde2192e5fd75632a5ba8e1d253 not found: ID does not exist" containerID="4921eedece9503fae32eab6e0a7cdffe23358fde2192e5fd75632a5ba8e1d253" Oct 06 15:25:17 crc kubenswrapper[4888]: I1006 15:25:17.964308 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4921eedece9503fae32eab6e0a7cdffe23358fde2192e5fd75632a5ba8e1d253"} err="failed to get container status \"4921eedece9503fae32eab6e0a7cdffe23358fde2192e5fd75632a5ba8e1d253\": rpc error: code = NotFound desc = could not find container \"4921eedece9503fae32eab6e0a7cdffe23358fde2192e5fd75632a5ba8e1d253\": container with ID starting with 4921eedece9503fae32eab6e0a7cdffe23358fde2192e5fd75632a5ba8e1d253 not found: ID does not exist" Oct 06 15:25:18 crc kubenswrapper[4888]: I1006 15:25:18.937291 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" path="/var/lib/kubelet/pods/41475ef0-f0aa-4cfc-8413-1863df4cbec9/volumes" Oct 06 15:25:19 crc kubenswrapper[4888]: I1006 15:25:19.815214 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ql4l7" Oct 06 15:25:19 crc kubenswrapper[4888]: I1006 15:25:19.815560 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ql4l7" Oct 06 15:25:19 crc kubenswrapper[4888]: I1006 15:25:19.891231 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ql4l7" Oct 06 15:25:19 crc kubenswrapper[4888]: I1006 15:25:19.942219 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ql4l7" Oct 06 15:25:21 crc kubenswrapper[4888]: I1006 15:25:21.054884 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ql4l7"] Oct 06 15:25:21 crc kubenswrapper[4888]: I1006 15:25:21.905019 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ql4l7" podUID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" containerName="registry-server" containerID="cri-o://b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5" gracePeriod=2 Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.359409 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ql4l7" Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.536031 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-catalog-content\") pod \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.536572 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjbpl\" (UniqueName: \"kubernetes.io/projected/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-kube-api-access-bjbpl\") pod \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.536630 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-utilities\") pod \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\" (UID: \"7bdd9951-89ed-4eff-9ce0-11fc98c612cb\") " Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.537591 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-utilities" (OuterVolumeSpecName: "utilities") pod "7bdd9951-89ed-4eff-9ce0-11fc98c612cb" (UID: "7bdd9951-89ed-4eff-9ce0-11fc98c612cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.543030 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-kube-api-access-bjbpl" (OuterVolumeSpecName: "kube-api-access-bjbpl") pod "7bdd9951-89ed-4eff-9ce0-11fc98c612cb" (UID: "7bdd9951-89ed-4eff-9ce0-11fc98c612cb"). InnerVolumeSpecName "kube-api-access-bjbpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.585770 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bdd9951-89ed-4eff-9ce0-11fc98c612cb" (UID: "7bdd9951-89ed-4eff-9ce0-11fc98c612cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.639059 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjbpl\" (UniqueName: \"kubernetes.io/projected/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-kube-api-access-bjbpl\") on node \"crc\" DevicePath \"\"" Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.639103 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.639118 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bdd9951-89ed-4eff-9ce0-11fc98c612cb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.916877 4888 generic.go:334] "Generic (PLEG): container finished" podID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" containerID="b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5" exitCode=0 Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.916936 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ql4l7" Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.916944 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql4l7" event={"ID":"7bdd9951-89ed-4eff-9ce0-11fc98c612cb","Type":"ContainerDied","Data":"b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5"} Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.916974 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ql4l7" event={"ID":"7bdd9951-89ed-4eff-9ce0-11fc98c612cb","Type":"ContainerDied","Data":"7cd71a64889a9f9de4378950901bb7982516e9215e16f8f5f35a7d1a66658b99"} Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.916991 4888 scope.go:117] "RemoveContainer" containerID="b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5" Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.962586 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ql4l7"] Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.968173 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ql4l7"] Oct 06 15:25:22 crc kubenswrapper[4888]: I1006 15:25:22.989311 4888 scope.go:117] "RemoveContainer" containerID="0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3" Oct 06 15:25:23 crc kubenswrapper[4888]: I1006 15:25:23.038395 4888 scope.go:117] "RemoveContainer" containerID="6999bf252914690fd1268db52d06c91198c4ed8397aa5ad9c98e059fc9643d4d" Oct 06 15:25:23 crc kubenswrapper[4888]: I1006 15:25:23.067744 4888 scope.go:117] "RemoveContainer" containerID="b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5" Oct 06 15:25:23 crc kubenswrapper[4888]: E1006 15:25:23.068261 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5\": container with ID starting with b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5 not found: ID does not exist" containerID="b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5" Oct 06 15:25:23 crc kubenswrapper[4888]: I1006 15:25:23.068330 
4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5"} err="failed to get container status \"b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5\": rpc error: code = NotFound desc = could not find container \"b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5\": container with ID starting with b22d0493cd7c99b66ff83b1f8fe2b57f1a6ee11bb0d94a615bbac43a4b5618c5 not found: ID does not exist" Oct 06 15:25:23 crc kubenswrapper[4888]: I1006 15:25:23.068363 4888 scope.go:117] "RemoveContainer" containerID="0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3" Oct 06 15:25:23 crc kubenswrapper[4888]: E1006 15:25:23.068764 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3\": container with ID starting with 0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3 not found: ID does not exist" containerID="0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3" Oct 06 15:25:23 crc kubenswrapper[4888]: I1006 15:25:23.068869 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3"} err="failed to get container status \"0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3\": rpc error: code = NotFound desc = could not find container \"0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3\": container with ID starting with 0bed2398413e7cd8e69e573466ad641d757e743aa04d72d933c55390addfd2e3 not found: ID does not exist" Oct 06 15:25:23 crc kubenswrapper[4888]: I1006 15:25:23.068901 4888 scope.go:117] "RemoveContainer" containerID="6999bf252914690fd1268db52d06c91198c4ed8397aa5ad9c98e059fc9643d4d" Oct 06 15:25:23 crc kubenswrapper[4888]: E1006 15:25:23.069609 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6999bf252914690fd1268db52d06c91198c4ed8397aa5ad9c98e059fc9643d4d\": container with ID starting with 6999bf252914690fd1268db52d06c91198c4ed8397aa5ad9c98e059fc9643d4d not found: ID does not exist" containerID="6999bf252914690fd1268db52d06c91198c4ed8397aa5ad9c98e059fc9643d4d" Oct 06 15:25:23 crc kubenswrapper[4888]: I1006 15:25:23.069645 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6999bf252914690fd1268db52d06c91198c4ed8397aa5ad9c98e059fc9643d4d"} err="failed to get container status \"6999bf252914690fd1268db52d06c91198c4ed8397aa5ad9c98e059fc9643d4d\": rpc error: code = NotFound desc = could not find container \"6999bf252914690fd1268db52d06c91198c4ed8397aa5ad9c98e059fc9643d4d\": container with ID starting with 6999bf252914690fd1268db52d06c91198c4ed8397aa5ad9c98e059fc9643d4d not found: ID does not exist" Oct 06 15:25:24 crc kubenswrapper[4888]: I1006 15:25:24.937071 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" path="/var/lib/kubelet/pods/7bdd9951-89ed-4eff-9ce0-11fc98c612cb/volumes" Oct 06 15:25:32 crc kubenswrapper[4888]: I1006 15:25:32.564296 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
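The RemoveContainer / "DeleteContainer returned error" pairs above are the kubelet re-requesting deletion of containers CRI-O has already pruned; a NotFound from the runtime is effectively success on a retry. A sketch of that idempotent-delete check using standard gRPC status codes (the wrapper function itself is hypothetical):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a CRI NotFound as success: the container is
// already gone, so there is nothing left to delete.
func removeContainer(remove func(id string) error, id string) error {
	err := remove(id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil // already deleted counts as deleted
	}
	return fmt.Errorf("failed to remove container %q: %w", id, err)
}

func main() {
	alreadyGone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeContainer(alreadyGone, "b22d0493cd7c")) // <nil>
}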
Oct 06 15:25:32 crc kubenswrapper[4888]: I1006 15:25:32.564296 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 15:25:32 crc kubenswrapper[4888]: I1006 15:25:32.565588 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 15:26:02 crc kubenswrapper[4888]: I1006 15:26:02.563405 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 15:26:02 crc kubenswrapper[4888]: I1006 15:26:02.563990 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 15:26:02 crc kubenswrapper[4888]: I1006 15:26:02.564043 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk"
Oct 06 15:26:02 crc kubenswrapper[4888]: I1006 15:26:02.564845 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 06 15:26:02 crc kubenswrapper[4888]: I1006 15:26:02.564910 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" gracePeriod=600
Oct 06 15:26:02 crc kubenswrapper[4888]: E1006 15:26:02.699113 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:26:03 crc kubenswrapper[4888]: I1006 15:26:03.299709 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" exitCode=0
Oct 06 15:26:03 crc kubenswrapper[4888]: I1006 15:26:03.299745 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729"}
Oct 06 15:26:03 crc kubenswrapper[4888]: I1006 15:26:03.299777 4888 scope.go:117] "RemoveContainer" containerID="899ff317fa3e35b62f992c01acf0a8a3a07cd1a84a2e8f800bd40f108362572b"
Oct 06 15:26:03 crc kubenswrapper[4888]: I1006 15:26:03.300655 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729"
Oct 06 15:26:03 crc kubenswrapper[4888]: E1006 15:26:03.301026 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:26:10 crc kubenswrapper[4888]: I1006 15:26:10.946468 4888 scope.go:117] "RemoveContainer" containerID="2b4dc6740a016ca6ce1c540a19fd842330a41e0c5c474f23f3081b55582b268f"
Oct 06 15:26:10 crc kubenswrapper[4888]: I1006 15:26:10.971623 4888 scope.go:117] "RemoveContainer" containerID="02c0ec3cbfe04095deca4ddfea209723f8faca551206dd967729bf4646a5e311"
Oct 06 15:26:15 crc kubenswrapper[4888]: I1006 15:26:15.921773 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729"
Oct 06 15:26:15 crc kubenswrapper[4888]: E1006 15:26:15.922439 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:26:26 crc kubenswrapper[4888]: I1006 15:26:26.921144 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729"
Oct 06 15:26:26 crc kubenswrapper[4888]: E1006 15:26:26.921868 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:26:37 crc kubenswrapper[4888]: I1006 15:26:37.922584 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729"
Oct 06 15:26:37 crc kubenswrapper[4888]: E1006 15:26:37.923512 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:26:48 crc kubenswrapper[4888]: I1006 15:26:48.921148 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729"
Oct 06 15:26:48 crc kubenswrapper[4888]: E1006 15:26:48.921943 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
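The repeating RemoveContainer / "back-off 5m0s" pairs that follow are the sync loop re-queuing the pod while the restart back-off is still in force. Kubelet's restart back-off grows exponentially up to a hard cap; the 5m cap is stated in the message itself, while the 10s base in this sketch is an assumption:

package main

import (
	"fmt"
	"time"
)

// backoff doubles the restart delay per failure and clamps it at the
// 5m0s cap quoted in the CrashLoopBackOff messages above.
func backoff(restarts int) time.Duration {
	d := 10 * time.Second // assumed base delay
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, backoff(r))
	}
	// After a handful of failures every retry is clamped to the cap,
	// which is why the identical error repeats for minutes on end.
}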
Oct 06 15:26:52 crc kubenswrapper[4888]: I1006 15:26:52.039721 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5hmct"]
Oct 06 15:26:52 crc kubenswrapper[4888]: I1006 15:26:52.047872 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-52mbw"]
Oct 06 15:26:52 crc kubenswrapper[4888]: I1006 15:26:52.058434 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rx7h6"]
Oct 06 15:26:52 crc kubenswrapper[4888]: I1006 15:26:52.066432 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-52mbw"]
Oct 06 15:26:52 crc kubenswrapper[4888]: I1006 15:26:52.075248 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5hmct"]
Oct 06 15:26:52 crc kubenswrapper[4888]: I1006 15:26:52.084464 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rx7h6"]
Oct 06 15:26:52 crc kubenswrapper[4888]: I1006 15:26:52.934969 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ea0efb-53b0-4fac-bb45-f01ce9b6430b" path="/var/lib/kubelet/pods/66ea0efb-53b0-4fac-bb45-f01ce9b6430b/volumes"
Oct 06 15:26:52 crc kubenswrapper[4888]: I1006 15:26:52.937775 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9" path="/var/lib/kubelet/pods/bc5a9326-74ad-4a0c-9be1-4ed7f2d526b9/volumes"
Oct 06 15:26:52 crc kubenswrapper[4888]: I1006 15:26:52.941149 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5b9041d-9cb2-4bd0-a57b-2a884add6fcc" path="/var/lib/kubelet/pods/f5b9041d-9cb2-4bd0-a57b-2a884add6fcc/volumes"
Oct 06 15:27:01 crc kubenswrapper[4888]: I1006 15:27:01.921659 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729"
Oct 06 15:27:01 crc kubenswrapper[4888]: E1006 15:27:01.922431 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:27:03 crc kubenswrapper[4888]: I1006 15:27:03.049473 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7b73-account-create-wmpks"]
Oct 06 15:27:03 crc kubenswrapper[4888]: I1006 15:27:03.058873 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9a2f-account-create-mcsms"]
Oct 06 15:27:03 crc kubenswrapper[4888]: I1006 15:27:03.069596 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-89a8-account-create-2p29h"]
Oct 06 15:27:03 crc kubenswrapper[4888]: I1006 15:27:03.077206 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9a2f-account-create-mcsms"]
Oct 06 15:27:03 crc kubenswrapper[4888]: I1006 15:27:03.084113 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7b73-account-create-wmpks"]
Oct 06 15:27:03 crc kubenswrapper[4888]: I1006 15:27:03.092227 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-89a8-account-create-2p29h"]
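Each finished OpenStack job pod above produces a SyncLoop DELETE (API deletion started) followed by a SyncLoop REMOVE (object gone), and on the next housekeeping pass the kubelet deletes the now-orphaned /var/lib/kubelet/pods/<UID>/volumes directory. A simplified sketch of that orphan scan; the function name and the assumption that volumes are already torn down are mine:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs removes the volumes dir of any pod directory
// whose UID is no longer in the active-pod set, echoing the
// "Cleaned up orphaned pod volumes dir" entries above.
func cleanupOrphanedPodDirs(root string, active map[string]bool) error {
	entries, err := os.ReadDir(root)
	if err != nil {
		return err
	}
	for _, e := range entries {
		uid := e.Name()
		if !e.IsDir() || active[uid] {
			continue
		}
		dir := filepath.Join(root, uid, "volumes")
		if err := os.RemoveAll(dir); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", uid, dir)
	}
	return nil
}

func main() {
	_ = cleanupOrphanedPodDirs("/var/lib/kubelet/pods", map[string]bool{})
}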
source="api" pods=["openstack/keystone-89a8-account-create-2p29h"] Oct 06 15:27:04 crc kubenswrapper[4888]: I1006 15:27:04.931712 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023135e5-d2fb-4bd2-a8b7-03b214c7c81f" path="/var/lib/kubelet/pods/023135e5-d2fb-4bd2-a8b7-03b214c7c81f/volumes" Oct 06 15:27:04 crc kubenswrapper[4888]: I1006 15:27:04.933246 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb82a472-d981-4b66-8101-ebf6ca21b88b" path="/var/lib/kubelet/pods/bb82a472-d981-4b66-8101-ebf6ca21b88b/volumes" Oct 06 15:27:04 crc kubenswrapper[4888]: I1006 15:27:04.933725 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3443d75-b692-4b7f-86fb-3293f1b817c8" path="/var/lib/kubelet/pods/f3443d75-b692-4b7f-86fb-3293f1b817c8/volumes" Oct 06 15:27:11 crc kubenswrapper[4888]: I1006 15:27:11.051909 4888 scope.go:117] "RemoveContainer" containerID="b14e679cc89b6b8627f7fbc23317dc24ca2d38472f8c83484cb6c0da73708589" Oct 06 15:27:11 crc kubenswrapper[4888]: I1006 15:27:11.074880 4888 scope.go:117] "RemoveContainer" containerID="bd3eb933b88d850edff3606c9aa377b0195929dce7ee87dc9993cf7e88d1988d" Oct 06 15:27:11 crc kubenswrapper[4888]: I1006 15:27:11.120816 4888 scope.go:117] "RemoveContainer" containerID="56b5ec5518b02ae97ce4d12bcffa6537c797d717d17556d70138bf55a39bd1dc" Oct 06 15:27:11 crc kubenswrapper[4888]: I1006 15:27:11.159988 4888 scope.go:117] "RemoveContainer" containerID="731acb55a9a88edf723d3f5a04eacdb75a6bec8a6dca1c5add4804d9b79ffab4" Oct 06 15:27:11 crc kubenswrapper[4888]: I1006 15:27:11.209659 4888 scope.go:117] "RemoveContainer" containerID="260f21e7cf7c4201a96ce0413be2f1548ae7f4f1b89855620c48739cb78b2867" Oct 06 15:27:11 crc kubenswrapper[4888]: I1006 15:27:11.257271 4888 scope.go:117] "RemoveContainer" containerID="5c7412460657d618d8de7270fe20adc30ee6a2b188d7e60c0e74dd8f760228d8" Oct 06 15:27:12 crc kubenswrapper[4888]: I1006 15:27:12.921829 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:27:12 crc kubenswrapper[4888]: E1006 15:27:12.922447 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:27:23 crc kubenswrapper[4888]: I1006 15:27:23.921962 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:27:23 crc kubenswrapper[4888]: E1006 15:27:23.922855 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:27:26 crc kubenswrapper[4888]: I1006 15:27:26.049996 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-rq7rs"] Oct 06 15:27:26 crc kubenswrapper[4888]: I1006 15:27:26.057701 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5lfmr"] 
Oct 06 15:27:26 crc kubenswrapper[4888]: I1006 15:27:26.070897 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-nk6fs"] Oct 06 15:27:26 crc kubenswrapper[4888]: I1006 15:27:26.079406 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-nk6fs"] Oct 06 15:27:26 crc kubenswrapper[4888]: I1006 15:27:26.087122 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-rq7rs"] Oct 06 15:27:26 crc kubenswrapper[4888]: I1006 15:27:26.093477 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5lfmr"] Oct 06 15:27:26 crc kubenswrapper[4888]: I1006 15:27:26.935922 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553a9a70-1882-4a91-b6a3-383e723f893a" path="/var/lib/kubelet/pods/553a9a70-1882-4a91-b6a3-383e723f893a/volumes" Oct 06 15:27:26 crc kubenswrapper[4888]: I1006 15:27:26.937389 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="666fdea5-da00-46e5-b4af-4b82ca36e313" path="/var/lib/kubelet/pods/666fdea5-da00-46e5-b4af-4b82ca36e313/volumes" Oct 06 15:27:26 crc kubenswrapper[4888]: I1006 15:27:26.938936 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d5c344-92d9-4ade-81d3-49f4c2065516" path="/var/lib/kubelet/pods/89d5c344-92d9-4ade-81d3-49f4c2065516/volumes" Oct 06 15:27:31 crc kubenswrapper[4888]: I1006 15:27:31.042555 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-745cp"] Oct 06 15:27:31 crc kubenswrapper[4888]: I1006 15:27:31.052121 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-745cp"] Oct 06 15:27:32 crc kubenswrapper[4888]: I1006 15:27:32.935616 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8f800f-89a2-4586-9298-81a993e6f60d" path="/var/lib/kubelet/pods/8e8f800f-89a2-4586-9298-81a993e6f60d/volumes" Oct 06 15:27:35 crc kubenswrapper[4888]: I1006 15:27:35.029016 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-28k8r"] Oct 06 15:27:35 crc kubenswrapper[4888]: I1006 15:27:35.037626 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-28k8r"] Oct 06 15:27:36 crc kubenswrapper[4888]: I1006 15:27:36.933984 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b6bbbb-cfd0-4a91-a830-1f1572e4f519" path="/var/lib/kubelet/pods/38b6bbbb-cfd0-4a91-a830-1f1572e4f519/volumes" Oct 06 15:27:38 crc kubenswrapper[4888]: I1006 15:27:38.924684 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:27:38 crc kubenswrapper[4888]: E1006 15:27:38.925403 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:27:45 crc kubenswrapper[4888]: I1006 15:27:45.027444 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e12a-account-create-d7cmc"] Oct 06 15:27:45 crc kubenswrapper[4888]: I1006 15:27:45.036334 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c32f-account-create-g69zm"] Oct 06 15:27:45 crc 
kubenswrapper[4888]: I1006 15:27:45.046207 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c32f-account-create-g69zm"] Oct 06 15:27:45 crc kubenswrapper[4888]: I1006 15:27:45.055418 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e12a-account-create-d7cmc"] Oct 06 15:27:46 crc kubenswrapper[4888]: I1006 15:27:46.932596 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ada5181-629e-4efb-97e8-e21d5d601d09" path="/var/lib/kubelet/pods/6ada5181-629e-4efb-97e8-e21d5d601d09/volumes" Oct 06 15:27:46 crc kubenswrapper[4888]: I1006 15:27:46.933497 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c98f487b-c9a8-402c-9595-d5bc0e7c66fa" path="/var/lib/kubelet/pods/c98f487b-c9a8-402c-9595-d5bc0e7c66fa/volumes" Oct 06 15:27:53 crc kubenswrapper[4888]: I1006 15:27:53.920790 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:27:53 crc kubenswrapper[4888]: E1006 15:27:53.921561 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:27:54 crc kubenswrapper[4888]: I1006 15:27:54.048560 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e5a4-account-create-hf6jn"] Oct 06 15:27:54 crc kubenswrapper[4888]: I1006 15:27:54.056243 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e5a4-account-create-hf6jn"] Oct 06 15:27:54 crc kubenswrapper[4888]: I1006 15:27:54.934117 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33e3ab89-3ae3-42ef-b94c-fdc2c205e105" path="/var/lib/kubelet/pods/33e3ab89-3ae3-42ef-b94c-fdc2c205e105/volumes" Oct 06 15:28:02 crc kubenswrapper[4888]: I1006 15:28:02.031741 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-5rgjm"] Oct 06 15:28:02 crc kubenswrapper[4888]: I1006 15:28:02.040592 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-5rgjm"] Oct 06 15:28:02 crc kubenswrapper[4888]: I1006 15:28:02.950162 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="840197d6-f6a9-4bfc-9e0b-74328e475532" path="/var/lib/kubelet/pods/840197d6-f6a9-4bfc-9e0b-74328e475532/volumes" Oct 06 15:28:07 crc kubenswrapper[4888]: I1006 15:28:07.921562 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:28:07 crc kubenswrapper[4888]: E1006 15:28:07.922135 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:28:08 crc kubenswrapper[4888]: I1006 15:28:08.034583 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-knz4q"] Oct 06 15:28:08 crc kubenswrapper[4888]: I1006 15:28:08.043305 4888 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-knz4q"] Oct 06 15:28:08 crc kubenswrapper[4888]: I1006 15:28:08.938431 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1ea537-b6a4-49dc-b215-a6b65ed08933" path="/var/lib/kubelet/pods/df1ea537-b6a4-49dc-b215-a6b65ed08933/volumes" Oct 06 15:28:11 crc kubenswrapper[4888]: I1006 15:28:11.426072 4888 scope.go:117] "RemoveContainer" containerID="cce703a67ea4d329fe1e949a7c7513b1fc172b6f07b148ae198f821e84730bc7" Oct 06 15:28:11 crc kubenswrapper[4888]: I1006 15:28:11.448814 4888 scope.go:117] "RemoveContainer" containerID="d40cb091e7ed4c9067d7ea42ab8adeda978339f510268435dda27cc2c8de1d25" Oct 06 15:28:11 crc kubenswrapper[4888]: I1006 15:28:11.503618 4888 scope.go:117] "RemoveContainer" containerID="899272be0e36f8519906fd7bed60c013326da79f4410c7aed52c1eddcdaa5078" Oct 06 15:28:11 crc kubenswrapper[4888]: I1006 15:28:11.550002 4888 scope.go:117] "RemoveContainer" containerID="dba918561bae064cf2c006b20fab52b6d9ff00dbadd6eab14808d524d3b27987" Oct 06 15:28:11 crc kubenswrapper[4888]: I1006 15:28:11.607091 4888 scope.go:117] "RemoveContainer" containerID="1a3810fb19d037c2da4c4ba0d360690222f91f07ad3ac2966c92f7c0c9e668de" Oct 06 15:28:11 crc kubenswrapper[4888]: I1006 15:28:11.640778 4888 scope.go:117] "RemoveContainer" containerID="da3ff19de0e0693f54dca0017bb4177218b6679a6c097a42bbf619d52b34f2f7" Oct 06 15:28:11 crc kubenswrapper[4888]: I1006 15:28:11.682895 4888 scope.go:117] "RemoveContainer" containerID="0f5545a4e8a44e3a5bb9b396c3a93489ed81320867af53589aba298feb307ccf" Oct 06 15:28:11 crc kubenswrapper[4888]: I1006 15:28:11.707294 4888 scope.go:117] "RemoveContainer" containerID="0a59a477c2f35050fcedc97ec00c7401faa46c57445db59a68671be281e62d11" Oct 06 15:28:11 crc kubenswrapper[4888]: I1006 15:28:11.737953 4888 scope.go:117] "RemoveContainer" containerID="5d78e5738988f3a051fad138a86641d05f7d98375731cbc9ec37fdd673bfc9d1" Oct 06 15:28:11 crc kubenswrapper[4888]: I1006 15:28:11.762476 4888 scope.go:117] "RemoveContainer" containerID="3cbccdc8bfdf1fedc812dd4f543dab3b10769aa7f88f9e7bde8dce16828cefbd" Oct 06 15:28:21 crc kubenswrapper[4888]: I1006 15:28:21.921865 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:28:21 crc kubenswrapper[4888]: E1006 15:28:21.922547 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:28:31 crc kubenswrapper[4888]: I1006 15:28:31.048127 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-j2tl6"] Oct 06 15:28:31 crc kubenswrapper[4888]: I1006 15:28:31.060933 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-j2tl6"] Oct 06 15:28:32 crc kubenswrapper[4888]: I1006 15:28:32.923042 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:28:32 crc kubenswrapper[4888]: E1006 15:28:32.924173 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:28:32 crc kubenswrapper[4888]: I1006 15:28:32.933671 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e36f3be-2b0f-45e6-8275-b66240419057" path="/var/lib/kubelet/pods/1e36f3be-2b0f-45e6-8275-b66240419057/volumes" Oct 06 15:28:41 crc kubenswrapper[4888]: I1006 15:28:41.031520 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jxszs"] Oct 06 15:28:41 crc kubenswrapper[4888]: I1006 15:28:41.041580 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jxszs"] Oct 06 15:28:42 crc kubenswrapper[4888]: I1006 15:28:42.935387 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832f4bfd-1fa5-48ca-87c4-eecd280e1aa0" path="/var/lib/kubelet/pods/832f4bfd-1fa5-48ca-87c4-eecd280e1aa0/volumes" Oct 06 15:28:44 crc kubenswrapper[4888]: I1006 15:28:44.922049 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:28:44 crc kubenswrapper[4888]: E1006 15:28:44.922956 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:28:55 crc kubenswrapper[4888]: I1006 15:28:55.921441 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:28:55 crc kubenswrapper[4888]: E1006 15:28:55.922181 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:29:09 crc kubenswrapper[4888]: I1006 15:29:09.921431 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:29:09 crc kubenswrapper[4888]: E1006 15:29:09.922150 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:29:10 crc kubenswrapper[4888]: I1006 15:29:10.049123 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-h6d7m"] Oct 06 15:29:10 crc kubenswrapper[4888]: I1006 15:29:10.059219 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-h6d7m"] Oct 06 15:29:10 crc kubenswrapper[4888]: I1006 15:29:10.935169 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="caf441af-cd19-416e-9759-8634523c0979" path="/var/lib/kubelet/pods/caf441af-cd19-416e-9759-8634523c0979/volumes" Oct 06 15:29:11 crc kubenswrapper[4888]: I1006 15:29:11.970442 4888 scope.go:117] "RemoveContainer" containerID="d7855d9ca6a7ab78b4abd7cc27af768c258f7fb42a42a0ceff0b2defe61db629" Oct 06 15:29:12 crc kubenswrapper[4888]: I1006 15:29:12.006886 4888 scope.go:117] "RemoveContainer" containerID="ac33590e39ab8ecff2307075d973a663542b0a3e362e88171088635fbad92126" Oct 06 15:29:12 crc kubenswrapper[4888]: I1006 15:29:12.053004 4888 scope.go:117] "RemoveContainer" containerID="6716d3f3ec03b19e57a35133a801ce1bae9aebc742544db97a4df7f98df283b3" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.480224 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lpdhn"] Oct 06 15:29:16 crc kubenswrapper[4888]: E1006 15:29:16.480991 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" containerName="registry-server" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.481007 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" containerName="registry-server" Oct 06 15:29:16 crc kubenswrapper[4888]: E1006 15:29:16.481023 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" containerName="extract-content" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.481031 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" containerName="extract-content" Oct 06 15:29:16 crc kubenswrapper[4888]: E1006 15:29:16.481057 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerName="extract-utilities" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.481065 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerName="extract-utilities" Oct 06 15:29:16 crc kubenswrapper[4888]: E1006 15:29:16.481075 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerName="extract-content" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.481081 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerName="extract-content" Oct 06 15:29:16 crc kubenswrapper[4888]: E1006 15:29:16.481095 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerName="registry-server" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.481102 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerName="registry-server" Oct 06 15:29:16 crc kubenswrapper[4888]: E1006 15:29:16.481124 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" containerName="extract-utilities" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.481131 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" containerName="extract-utilities" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.481347 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="41475ef0-f0aa-4cfc-8413-1863df4cbec9" containerName="registry-server" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.481367 4888 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7bdd9951-89ed-4eff-9ce0-11fc98c612cb" containerName="registry-server" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.483007 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.491190 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpdhn"] Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.598434 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-utilities\") pod \"redhat-operators-lpdhn\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.598517 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghvj2\" (UniqueName: \"kubernetes.io/projected/458319ab-dbcf-4d7a-bec9-bb00717e80a1-kube-api-access-ghvj2\") pod \"redhat-operators-lpdhn\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.598610 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-catalog-content\") pod \"redhat-operators-lpdhn\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.675648 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9k5dr"] Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.677969 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.693345 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9k5dr"] Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.700981 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-catalog-content\") pod \"redhat-operators-lpdhn\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.701123 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-utilities\") pod \"redhat-operators-lpdhn\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.701197 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghvj2\" (UniqueName: \"kubernetes.io/projected/458319ab-dbcf-4d7a-bec9-bb00717e80a1-kube-api-access-ghvj2\") pod \"redhat-operators-lpdhn\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.701422 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-catalog-content\") pod \"redhat-operators-lpdhn\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.701631 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-utilities\") pod \"redhat-operators-lpdhn\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.727092 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghvj2\" (UniqueName: \"kubernetes.io/projected/458319ab-dbcf-4d7a-bec9-bb00717e80a1-kube-api-access-ghvj2\") pod \"redhat-operators-lpdhn\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.802437 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xfcn\" (UniqueName: \"kubernetes.io/projected/1f738d8e-d870-4672-bbdd-44b18c9cf27d-kube-api-access-7xfcn\") pod \"redhat-marketplace-9k5dr\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.802751 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-catalog-content\") pod \"redhat-marketplace-9k5dr\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.803119 4888 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-utilities\") pod \"redhat-marketplace-9k5dr\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.805169 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.904950 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-utilities\") pod \"redhat-marketplace-9k5dr\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.905337 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xfcn\" (UniqueName: \"kubernetes.io/projected/1f738d8e-d870-4672-bbdd-44b18c9cf27d-kube-api-access-7xfcn\") pod \"redhat-marketplace-9k5dr\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.905463 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-catalog-content\") pod \"redhat-marketplace-9k5dr\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.905691 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-utilities\") pod \"redhat-marketplace-9k5dr\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.906182 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-catalog-content\") pod \"redhat-marketplace-9k5dr\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:16 crc kubenswrapper[4888]: I1006 15:29:16.927629 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xfcn\" (UniqueName: \"kubernetes.io/projected/1f738d8e-d870-4672-bbdd-44b18c9cf27d-kube-api-access-7xfcn\") pod \"redhat-marketplace-9k5dr\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:17 crc kubenswrapper[4888]: I1006 15:29:17.004390 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:17 crc kubenswrapper[4888]: I1006 15:29:17.261917 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpdhn"] Oct 06 15:29:17 crc kubenswrapper[4888]: I1006 15:29:17.471553 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9k5dr"] Oct 06 15:29:17 crc kubenswrapper[4888]: I1006 15:29:17.967675 4888 generic.go:334] "Generic (PLEG): container finished" podID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" containerID="5d8801642411d0db5cebd38b3e339f689f6aa68c4d0bc420c0bb20df1795b7bf" exitCode=0 Oct 06 15:29:17 crc kubenswrapper[4888]: I1006 15:29:17.967789 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpdhn" event={"ID":"458319ab-dbcf-4d7a-bec9-bb00717e80a1","Type":"ContainerDied","Data":"5d8801642411d0db5cebd38b3e339f689f6aa68c4d0bc420c0bb20df1795b7bf"} Oct 06 15:29:17 crc kubenswrapper[4888]: I1006 15:29:17.968010 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpdhn" event={"ID":"458319ab-dbcf-4d7a-bec9-bb00717e80a1","Type":"ContainerStarted","Data":"a844d5d19f832f70d4156d68ab1a9da68fade6d5db2a80fd579d0e6db22c436c"} Oct 06 15:29:17 crc kubenswrapper[4888]: I1006 15:29:17.969623 4888 generic.go:334] "Generic (PLEG): container finished" podID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" containerID="29504a9f611a278ac53eb0f65203f87a461044c964e04fb22887758d25e40c0b" exitCode=0 Oct 06 15:29:17 crc kubenswrapper[4888]: I1006 15:29:17.969652 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9k5dr" event={"ID":"1f738d8e-d870-4672-bbdd-44b18c9cf27d","Type":"ContainerDied","Data":"29504a9f611a278ac53eb0f65203f87a461044c964e04fb22887758d25e40c0b"} Oct 06 15:29:17 crc kubenswrapper[4888]: I1006 15:29:17.969697 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9k5dr" event={"ID":"1f738d8e-d870-4672-bbdd-44b18c9cf27d","Type":"ContainerStarted","Data":"2a8fa1b1cd2a48ae99eecfb0c72e11f7e404e0193f273628703aefd2c434d990"} Oct 06 15:29:17 crc kubenswrapper[4888]: I1006 15:29:17.970085 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 15:29:18 crc kubenswrapper[4888]: I1006 15:29:18.979850 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpdhn" event={"ID":"458319ab-dbcf-4d7a-bec9-bb00717e80a1","Type":"ContainerStarted","Data":"09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418"} Oct 06 15:29:18 crc kubenswrapper[4888]: I1006 15:29:18.988289 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9k5dr" event={"ID":"1f738d8e-d870-4672-bbdd-44b18c9cf27d","Type":"ContainerStarted","Data":"e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac"} Oct 06 15:29:20 crc kubenswrapper[4888]: I1006 15:29:20.009705 4888 generic.go:334] "Generic (PLEG): container finished" podID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" containerID="e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac" exitCode=0 Oct 06 15:29:20 crc kubenswrapper[4888]: I1006 15:29:20.009779 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9k5dr" 
event={"ID":"1f738d8e-d870-4672-bbdd-44b18c9cf27d","Type":"ContainerDied","Data":"e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac"} Oct 06 15:29:21 crc kubenswrapper[4888]: I1006 15:29:21.921132 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:29:21 crc kubenswrapper[4888]: E1006 15:29:21.921766 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:29:22 crc kubenswrapper[4888]: I1006 15:29:22.030186 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9k5dr" event={"ID":"1f738d8e-d870-4672-bbdd-44b18c9cf27d","Type":"ContainerStarted","Data":"f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb"} Oct 06 15:29:22 crc kubenswrapper[4888]: I1006 15:29:22.055513 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9k5dr" podStartSLOduration=2.8340171400000003 podStartE2EDuration="6.055494801s" podCreationTimestamp="2025-10-06 15:29:16 +0000 UTC" firstStartedPulling="2025-10-06 15:29:17.970826422 +0000 UTC m=+1697.783177140" lastFinishedPulling="2025-10-06 15:29:21.192304063 +0000 UTC m=+1701.004654801" observedRunningTime="2025-10-06 15:29:22.04946762 +0000 UTC m=+1701.861818348" watchObservedRunningTime="2025-10-06 15:29:22.055494801 +0000 UTC m=+1701.867845519" Oct 06 15:29:26 crc kubenswrapper[4888]: I1006 15:29:26.067517 4888 generic.go:334] "Generic (PLEG): container finished" podID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" containerID="09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418" exitCode=0 Oct 06 15:29:26 crc kubenswrapper[4888]: I1006 15:29:26.067603 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpdhn" event={"ID":"458319ab-dbcf-4d7a-bec9-bb00717e80a1","Type":"ContainerDied","Data":"09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418"} Oct 06 15:29:27 crc kubenswrapper[4888]: I1006 15:29:27.005615 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:27 crc kubenswrapper[4888]: I1006 15:29:27.005930 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:27 crc kubenswrapper[4888]: I1006 15:29:27.060206 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:27 crc kubenswrapper[4888]: I1006 15:29:27.078730 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpdhn" event={"ID":"458319ab-dbcf-4d7a-bec9-bb00717e80a1","Type":"ContainerStarted","Data":"a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3"} Oct 06 15:29:27 crc kubenswrapper[4888]: I1006 15:29:27.110248 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lpdhn" podStartSLOduration=2.622990901 podStartE2EDuration="11.110224434s" podCreationTimestamp="2025-10-06 15:29:16 +0000 UTC" 
firstStartedPulling="2025-10-06 15:29:17.969825291 +0000 UTC m=+1697.782176009" lastFinishedPulling="2025-10-06 15:29:26.457058814 +0000 UTC m=+1706.269409542" observedRunningTime="2025-10-06 15:29:27.105041371 +0000 UTC m=+1706.917392099" watchObservedRunningTime="2025-10-06 15:29:27.110224434 +0000 UTC m=+1706.922575152" Oct 06 15:29:27 crc kubenswrapper[4888]: I1006 15:29:27.124674 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:28 crc kubenswrapper[4888]: I1006 15:29:28.303854 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9k5dr"] Oct 06 15:29:29 crc kubenswrapper[4888]: I1006 15:29:29.092849 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9k5dr" podUID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" containerName="registry-server" containerID="cri-o://f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb" gracePeriod=2 Oct 06 15:29:29 crc kubenswrapper[4888]: I1006 15:29:29.518888 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:29 crc kubenswrapper[4888]: I1006 15:29:29.546944 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-catalog-content\") pod \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " Oct 06 15:29:29 crc kubenswrapper[4888]: I1006 15:29:29.546993 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xfcn\" (UniqueName: \"kubernetes.io/projected/1f738d8e-d870-4672-bbdd-44b18c9cf27d-kube-api-access-7xfcn\") pod \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " Oct 06 15:29:29 crc kubenswrapper[4888]: I1006 15:29:29.547174 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-utilities\") pod \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\" (UID: \"1f738d8e-d870-4672-bbdd-44b18c9cf27d\") " Oct 06 15:29:29 crc kubenswrapper[4888]: I1006 15:29:29.547710 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-utilities" (OuterVolumeSpecName: "utilities") pod "1f738d8e-d870-4672-bbdd-44b18c9cf27d" (UID: "1f738d8e-d870-4672-bbdd-44b18c9cf27d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:29:29 crc kubenswrapper[4888]: I1006 15:29:29.555608 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f738d8e-d870-4672-bbdd-44b18c9cf27d-kube-api-access-7xfcn" (OuterVolumeSpecName: "kube-api-access-7xfcn") pod "1f738d8e-d870-4672-bbdd-44b18c9cf27d" (UID: "1f738d8e-d870-4672-bbdd-44b18c9cf27d"). InnerVolumeSpecName "kube-api-access-7xfcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:29:29 crc kubenswrapper[4888]: I1006 15:29:29.559740 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f738d8e-d870-4672-bbdd-44b18c9cf27d" (UID: "1f738d8e-d870-4672-bbdd-44b18c9cf27d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:29:29 crc kubenswrapper[4888]: I1006 15:29:29.649471 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:29:29 crc kubenswrapper[4888]: I1006 15:29:29.649501 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f738d8e-d870-4672-bbdd-44b18c9cf27d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:29:29 crc kubenswrapper[4888]: I1006 15:29:29.649513 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xfcn\" (UniqueName: \"kubernetes.io/projected/1f738d8e-d870-4672-bbdd-44b18c9cf27d-kube-api-access-7xfcn\") on node \"crc\" DevicePath \"\"" Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.103552 4888 generic.go:334] "Generic (PLEG): container finished" podID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" containerID="f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb" exitCode=0 Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.103607 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9k5dr" event={"ID":"1f738d8e-d870-4672-bbdd-44b18c9cf27d","Type":"ContainerDied","Data":"f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb"} Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.103615 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9k5dr" Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.103911 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9k5dr" event={"ID":"1f738d8e-d870-4672-bbdd-44b18c9cf27d","Type":"ContainerDied","Data":"2a8fa1b1cd2a48ae99eecfb0c72e11f7e404e0193f273628703aefd2c434d990"} Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.103944 4888 scope.go:117] "RemoveContainer" containerID="f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb" Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.126105 4888 scope.go:117] "RemoveContainer" containerID="e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac" Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.147930 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9k5dr"] Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.154876 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9k5dr"] Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.160123 4888 scope.go:117] "RemoveContainer" containerID="29504a9f611a278ac53eb0f65203f87a461044c964e04fb22887758d25e40c0b" Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.203089 4888 scope.go:117] "RemoveContainer" containerID="f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb" Oct 06 15:29:30 crc kubenswrapper[4888]: E1006 15:29:30.203537 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb\": container with ID starting with f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb not found: ID does not exist" containerID="f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb" Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.203570 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb"} err="failed to get container status \"f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb\": rpc error: code = NotFound desc = could not find container \"f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb\": container with ID starting with f45abbd9196213a56b844c8f6259c902f3ee5132780b61ffbef3edb5622a2dbb not found: ID does not exist" Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.203597 4888 scope.go:117] "RemoveContainer" containerID="e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac" Oct 06 15:29:30 crc kubenswrapper[4888]: E1006 15:29:30.203945 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac\": container with ID starting with e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac not found: ID does not exist" containerID="e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac" Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.203994 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac"} err="failed to get container status \"e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac\": rpc error: code = NotFound desc = could not find 
container \"e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac\": container with ID starting with e74d37a18f475213fe55a28252c6de28c8f905973c04910bbb302b2a2fcf58ac not found: ID does not exist" Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.204025 4888 scope.go:117] "RemoveContainer" containerID="29504a9f611a278ac53eb0f65203f87a461044c964e04fb22887758d25e40c0b" Oct 06 15:29:30 crc kubenswrapper[4888]: E1006 15:29:30.204296 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29504a9f611a278ac53eb0f65203f87a461044c964e04fb22887758d25e40c0b\": container with ID starting with 29504a9f611a278ac53eb0f65203f87a461044c964e04fb22887758d25e40c0b not found: ID does not exist" containerID="29504a9f611a278ac53eb0f65203f87a461044c964e04fb22887758d25e40c0b" Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.204320 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29504a9f611a278ac53eb0f65203f87a461044c964e04fb22887758d25e40c0b"} err="failed to get container status \"29504a9f611a278ac53eb0f65203f87a461044c964e04fb22887758d25e40c0b\": rpc error: code = NotFound desc = could not find container \"29504a9f611a278ac53eb0f65203f87a461044c964e04fb22887758d25e40c0b\": container with ID starting with 29504a9f611a278ac53eb0f65203f87a461044c964e04fb22887758d25e40c0b not found: ID does not exist" Oct 06 15:29:30 crc kubenswrapper[4888]: I1006 15:29:30.932331 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" path="/var/lib/kubelet/pods/1f738d8e-d870-4672-bbdd-44b18c9cf27d/volumes" Oct 06 15:29:32 crc kubenswrapper[4888]: I1006 15:29:32.921482 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:29:32 crc kubenswrapper[4888]: E1006 15:29:32.922045 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:29:36 crc kubenswrapper[4888]: I1006 15:29:36.805500 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:36 crc kubenswrapper[4888]: I1006 15:29:36.806110 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:36 crc kubenswrapper[4888]: I1006 15:29:36.850355 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:37 crc kubenswrapper[4888]: I1006 15:29:37.206223 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:37 crc kubenswrapper[4888]: I1006 15:29:37.261769 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lpdhn"] Oct 06 15:29:39 crc kubenswrapper[4888]: I1006 15:29:39.180297 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lpdhn" podUID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" containerName="registry-server" 
containerID="cri-o://a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3" gracePeriod=2 Oct 06 15:29:39 crc kubenswrapper[4888]: I1006 15:29:39.649129 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:39 crc kubenswrapper[4888]: I1006 15:29:39.831646 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-catalog-content\") pod \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " Oct 06 15:29:39 crc kubenswrapper[4888]: I1006 15:29:39.831845 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-utilities\") pod \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " Oct 06 15:29:39 crc kubenswrapper[4888]: I1006 15:29:39.831995 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghvj2\" (UniqueName: \"kubernetes.io/projected/458319ab-dbcf-4d7a-bec9-bb00717e80a1-kube-api-access-ghvj2\") pod \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\" (UID: \"458319ab-dbcf-4d7a-bec9-bb00717e80a1\") " Oct 06 15:29:39 crc kubenswrapper[4888]: I1006 15:29:39.832595 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-utilities" (OuterVolumeSpecName: "utilities") pod "458319ab-dbcf-4d7a-bec9-bb00717e80a1" (UID: "458319ab-dbcf-4d7a-bec9-bb00717e80a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:29:39 crc kubenswrapper[4888]: I1006 15:29:39.840637 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458319ab-dbcf-4d7a-bec9-bb00717e80a1-kube-api-access-ghvj2" (OuterVolumeSpecName: "kube-api-access-ghvj2") pod "458319ab-dbcf-4d7a-bec9-bb00717e80a1" (UID: "458319ab-dbcf-4d7a-bec9-bb00717e80a1"). InnerVolumeSpecName "kube-api-access-ghvj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:29:39 crc kubenswrapper[4888]: I1006 15:29:39.923287 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "458319ab-dbcf-4d7a-bec9-bb00717e80a1" (UID: "458319ab-dbcf-4d7a-bec9-bb00717e80a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:29:39 crc kubenswrapper[4888]: I1006 15:29:39.934333 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:29:39 crc kubenswrapper[4888]: I1006 15:29:39.934361 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghvj2\" (UniqueName: \"kubernetes.io/projected/458319ab-dbcf-4d7a-bec9-bb00717e80a1-kube-api-access-ghvj2\") on node \"crc\" DevicePath \"\"" Oct 06 15:29:39 crc kubenswrapper[4888]: I1006 15:29:39.934372 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/458319ab-dbcf-4d7a-bec9-bb00717e80a1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.189364 4888 generic.go:334] "Generic (PLEG): container finished" podID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" containerID="a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3" exitCode=0 Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.189411 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpdhn" event={"ID":"458319ab-dbcf-4d7a-bec9-bb00717e80a1","Type":"ContainerDied","Data":"a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3"} Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.189427 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpdhn" Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.189442 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpdhn" event={"ID":"458319ab-dbcf-4d7a-bec9-bb00717e80a1","Type":"ContainerDied","Data":"a844d5d19f832f70d4156d68ab1a9da68fade6d5db2a80fd579d0e6db22c436c"} Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.189460 4888 scope.go:117] "RemoveContainer" containerID="a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3" Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.217263 4888 scope.go:117] "RemoveContainer" containerID="09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418" Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.224562 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lpdhn"] Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.246509 4888 scope.go:117] "RemoveContainer" containerID="5d8801642411d0db5cebd38b3e339f689f6aa68c4d0bc420c0bb20df1795b7bf" Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.252639 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lpdhn"] Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.286146 4888 scope.go:117] "RemoveContainer" containerID="a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3" Oct 06 15:29:40 crc kubenswrapper[4888]: E1006 15:29:40.286560 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3\": container with ID starting with a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3 not found: ID does not exist" containerID="a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3" Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.286615 4888 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3"} err="failed to get container status \"a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3\": rpc error: code = NotFound desc = could not find container \"a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3\": container with ID starting with a2e1e57209e6c6c1df6566468b3ae722eda04ae5b2302102b8016d18e7e418f3 not found: ID does not exist" Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.286637 4888 scope.go:117] "RemoveContainer" containerID="09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418" Oct 06 15:29:40 crc kubenswrapper[4888]: E1006 15:29:40.287221 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418\": container with ID starting with 09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418 not found: ID does not exist" containerID="09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418" Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.287246 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418"} err="failed to get container status \"09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418\": rpc error: code = NotFound desc = could not find container \"09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418\": container with ID starting with 09070460e3669b9e7268e73d68ebd8b155045b8197bfa7caac101f5f895d9418 not found: ID does not exist" Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.287265 4888 scope.go:117] "RemoveContainer" containerID="5d8801642411d0db5cebd38b3e339f689f6aa68c4d0bc420c0bb20df1795b7bf" Oct 06 15:29:40 crc kubenswrapper[4888]: E1006 15:29:40.287601 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8801642411d0db5cebd38b3e339f689f6aa68c4d0bc420c0bb20df1795b7bf\": container with ID starting with 5d8801642411d0db5cebd38b3e339f689f6aa68c4d0bc420c0bb20df1795b7bf not found: ID does not exist" containerID="5d8801642411d0db5cebd38b3e339f689f6aa68c4d0bc420c0bb20df1795b7bf" Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.287645 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8801642411d0db5cebd38b3e339f689f6aa68c4d0bc420c0bb20df1795b7bf"} err="failed to get container status \"5d8801642411d0db5cebd38b3e339f689f6aa68c4d0bc420c0bb20df1795b7bf\": rpc error: code = NotFound desc = could not find container \"5d8801642411d0db5cebd38b3e339f689f6aa68c4d0bc420c0bb20df1795b7bf\": container with ID starting with 5d8801642411d0db5cebd38b3e339f689f6aa68c4d0bc420c0bb20df1795b7bf not found: ID does not exist" Oct 06 15:29:40 crc kubenswrapper[4888]: I1006 15:29:40.933163 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" path="/var/lib/kubelet/pods/458319ab-dbcf-4d7a-bec9-bb00717e80a1/volumes" Oct 06 15:29:41 crc kubenswrapper[4888]: I1006 15:29:41.053638 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-pmxgb"] Oct 06 15:29:41 crc kubenswrapper[4888]: I1006 15:29:41.063411 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-db-create-hzrxr"] Oct 06 15:29:41 crc kubenswrapper[4888]: I1006 15:29:41.070466 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-qd58v"] Oct 06 15:29:41 crc kubenswrapper[4888]: I1006 15:29:41.076863 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pmxgb"] Oct 06 15:29:41 crc kubenswrapper[4888]: I1006 15:29:41.084414 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-hzrxr"] Oct 06 15:29:41 crc kubenswrapper[4888]: I1006 15:29:41.106183 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-qd58v"] Oct 06 15:29:42 crc kubenswrapper[4888]: I1006 15:29:42.936147 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94e623f3-1187-456b-97e3-d18f0f278c19" path="/var/lib/kubelet/pods/94e623f3-1187-456b-97e3-d18f0f278c19/volumes" Oct 06 15:29:42 crc kubenswrapper[4888]: I1006 15:29:42.938201 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ea57a3-51e1-49df-a9cb-531568e7867a" path="/var/lib/kubelet/pods/a7ea57a3-51e1-49df-a9cb-531568e7867a/volumes" Oct 06 15:29:42 crc kubenswrapper[4888]: I1006 15:29:42.938754 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d229f482-25b6-4513-a024-45b8556fe4a4" path="/var/lib/kubelet/pods/d229f482-25b6-4513-a024-45b8556fe4a4/volumes" Oct 06 15:29:46 crc kubenswrapper[4888]: I1006 15:29:46.922727 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:29:46 crc kubenswrapper[4888]: E1006 15:29:46.923336 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:29:51 crc kubenswrapper[4888]: I1006 15:29:51.061221 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b448-account-create-b9xtx"] Oct 06 15:29:51 crc kubenswrapper[4888]: I1006 15:29:51.070215 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-59e6-account-create-zd76t"] Oct 06 15:29:51 crc kubenswrapper[4888]: I1006 15:29:51.079942 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b448-account-create-b9xtx"] Oct 06 15:29:51 crc kubenswrapper[4888]: I1006 15:29:51.089119 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-59e6-account-create-zd76t"] Oct 06 15:29:52 crc kubenswrapper[4888]: I1006 15:29:52.935514 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ccea228-64b2-4ec5-a967-31d1430a8614" path="/var/lib/kubelet/pods/1ccea228-64b2-4ec5-a967-31d1430a8614/volumes" Oct 06 15:29:52 crc kubenswrapper[4888]: I1006 15:29:52.938263 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd825252-f2fb-4127-b909-9fa9be7d6d39" path="/var/lib/kubelet/pods/dd825252-f2fb-4127-b909-9fa9be7d6d39/volumes" Oct 06 15:29:57 crc kubenswrapper[4888]: I1006 15:29:57.921814 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:29:57 crc kubenswrapper[4888]: E1006 15:29:57.922397 4888 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.147334 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh"] Oct 06 15:30:00 crc kubenswrapper[4888]: E1006 15:30:00.148519 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" containerName="extract-content" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.148539 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" containerName="extract-content" Oct 06 15:30:00 crc kubenswrapper[4888]: E1006 15:30:00.148548 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" containerName="registry-server" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.148555 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" containerName="registry-server" Oct 06 15:30:00 crc kubenswrapper[4888]: E1006 15:30:00.148594 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" containerName="extract-utilities" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.148603 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" containerName="extract-utilities" Oct 06 15:30:00 crc kubenswrapper[4888]: E1006 15:30:00.148614 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" containerName="extract-content" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.148620 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" containerName="extract-content" Oct 06 15:30:00 crc kubenswrapper[4888]: E1006 15:30:00.148639 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" containerName="extract-utilities" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.148646 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" containerName="extract-utilities" Oct 06 15:30:00 crc kubenswrapper[4888]: E1006 15:30:00.148668 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" containerName="registry-server" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.148675 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" containerName="registry-server" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.148919 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f738d8e-d870-4672-bbdd-44b18c9cf27d" containerName="registry-server" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.148937 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="458319ab-dbcf-4d7a-bec9-bb00717e80a1" containerName="registry-server" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.149786 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.151456 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.152083 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.158310 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh"] Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.303470 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-secret-volume\") pod \"collect-profiles-29329410-95mjh\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.303522 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-config-volume\") pod \"collect-profiles-29329410-95mjh\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.303881 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l74kj\" (UniqueName: \"kubernetes.io/projected/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-kube-api-access-l74kj\") pod \"collect-profiles-29329410-95mjh\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.406161 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l74kj\" (UniqueName: \"kubernetes.io/projected/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-kube-api-access-l74kj\") pod \"collect-profiles-29329410-95mjh\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.406572 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-secret-volume\") pod \"collect-profiles-29329410-95mjh\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.406684 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-config-volume\") pod \"collect-profiles-29329410-95mjh\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.407543 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-config-volume\") pod 
\"collect-profiles-29329410-95mjh\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.419428 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-secret-volume\") pod \"collect-profiles-29329410-95mjh\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.423374 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l74kj\" (UniqueName: \"kubernetes.io/projected/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-kube-api-access-l74kj\") pod \"collect-profiles-29329410-95mjh\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.489963 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:00 crc kubenswrapper[4888]: I1006 15:30:00.956265 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh"] Oct 06 15:30:01 crc kubenswrapper[4888]: I1006 15:30:01.375530 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" event={"ID":"6751bf31-8e5c-471f-bdbb-1ddd06bcf233","Type":"ContainerStarted","Data":"8b6f4bd15e238a68652f75eb7278c12dd92f400fe95bc8ac235ed8141beab749"} Oct 06 15:30:01 crc kubenswrapper[4888]: I1006 15:30:01.375578 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" event={"ID":"6751bf31-8e5c-471f-bdbb-1ddd06bcf233","Type":"ContainerStarted","Data":"7cc8680fb8b251343cb87855659c35d7c556b9df436bdaf108401fa219843361"} Oct 06 15:30:01 crc kubenswrapper[4888]: I1006 15:30:01.397780 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" podStartSLOduration=1.397760826 podStartE2EDuration="1.397760826s" podCreationTimestamp="2025-10-06 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 15:30:01.391234849 +0000 UTC m=+1741.203585577" watchObservedRunningTime="2025-10-06 15:30:01.397760826 +0000 UTC m=+1741.210111544" Oct 06 15:30:02 crc kubenswrapper[4888]: I1006 15:30:02.386424 4888 generic.go:334] "Generic (PLEG): container finished" podID="6751bf31-8e5c-471f-bdbb-1ddd06bcf233" containerID="8b6f4bd15e238a68652f75eb7278c12dd92f400fe95bc8ac235ed8141beab749" exitCode=0 Oct 06 15:30:02 crc kubenswrapper[4888]: I1006 15:30:02.386529 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" event={"ID":"6751bf31-8e5c-471f-bdbb-1ddd06bcf233","Type":"ContainerDied","Data":"8b6f4bd15e238a68652f75eb7278c12dd92f400fe95bc8ac235ed8141beab749"} Oct 06 15:30:03 crc kubenswrapper[4888]: I1006 15:30:03.766615 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:03 crc kubenswrapper[4888]: I1006 15:30:03.877959 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-config-volume\") pod \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " Oct 06 15:30:03 crc kubenswrapper[4888]: I1006 15:30:03.878250 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l74kj\" (UniqueName: \"kubernetes.io/projected/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-kube-api-access-l74kj\") pod \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " Oct 06 15:30:03 crc kubenswrapper[4888]: I1006 15:30:03.878387 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-secret-volume\") pod \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\" (UID: \"6751bf31-8e5c-471f-bdbb-1ddd06bcf233\") " Oct 06 15:30:03 crc kubenswrapper[4888]: I1006 15:30:03.878733 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-config-volume" (OuterVolumeSpecName: "config-volume") pod "6751bf31-8e5c-471f-bdbb-1ddd06bcf233" (UID: "6751bf31-8e5c-471f-bdbb-1ddd06bcf233"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:30:03 crc kubenswrapper[4888]: I1006 15:30:03.879623 4888 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 15:30:03 crc kubenswrapper[4888]: I1006 15:30:03.884005 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6751bf31-8e5c-471f-bdbb-1ddd06bcf233" (UID: "6751bf31-8e5c-471f-bdbb-1ddd06bcf233"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:30:03 crc kubenswrapper[4888]: I1006 15:30:03.886387 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-kube-api-access-l74kj" (OuterVolumeSpecName: "kube-api-access-l74kj") pod "6751bf31-8e5c-471f-bdbb-1ddd06bcf233" (UID: "6751bf31-8e5c-471f-bdbb-1ddd06bcf233"). InnerVolumeSpecName "kube-api-access-l74kj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:30:03 crc kubenswrapper[4888]: I1006 15:30:03.981131 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l74kj\" (UniqueName: \"kubernetes.io/projected/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-kube-api-access-l74kj\") on node \"crc\" DevicePath \"\"" Oct 06 15:30:03 crc kubenswrapper[4888]: I1006 15:30:03.981421 4888 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6751bf31-8e5c-471f-bdbb-1ddd06bcf233-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 15:30:04 crc kubenswrapper[4888]: I1006 15:30:04.412889 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" event={"ID":"6751bf31-8e5c-471f-bdbb-1ddd06bcf233","Type":"ContainerDied","Data":"7cc8680fb8b251343cb87855659c35d7c556b9df436bdaf108401fa219843361"} Oct 06 15:30:04 crc kubenswrapper[4888]: I1006 15:30:04.412933 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cc8680fb8b251343cb87855659c35d7c556b9df436bdaf108401fa219843361" Oct 06 15:30:04 crc kubenswrapper[4888]: I1006 15:30:04.413004 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh" Oct 06 15:30:05 crc kubenswrapper[4888]: I1006 15:30:05.029524 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0ed5-account-create-8k69s"] Oct 06 15:30:05 crc kubenswrapper[4888]: I1006 15:30:05.038168 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0ed5-account-create-8k69s"] Oct 06 15:30:06 crc kubenswrapper[4888]: I1006 15:30:06.932303 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c1c65d-3a97-48cc-8377-cbc4cb23ddac" path="/var/lib/kubelet/pods/71c1c65d-3a97-48cc-8377-cbc4cb23ddac/volumes" Oct 06 15:30:10 crc kubenswrapper[4888]: I1006 15:30:10.926441 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:30:10 crc kubenswrapper[4888]: E1006 15:30:10.927270 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:30:12 crc kubenswrapper[4888]: I1006 15:30:12.148145 4888 scope.go:117] "RemoveContainer" containerID="eca652b7a8d8d03eeca70d3351a83368318a5bb003d8f25f125dd15a707163bb" Oct 06 15:30:12 crc kubenswrapper[4888]: I1006 15:30:12.179478 4888 scope.go:117] "RemoveContainer" containerID="d484eb88e93f948470fe3fe31abb4a630f536bc69d8957f395387cc2f0d2ab10" Oct 06 15:30:12 crc kubenswrapper[4888]: I1006 15:30:12.249485 4888 scope.go:117] "RemoveContainer" containerID="5db4f16a31b5f00b1351bc57bf8e218e803f66df4667b77a6ab63e536f81f1d2" Oct 06 15:30:12 crc kubenswrapper[4888]: I1006 15:30:12.287054 4888 scope.go:117] "RemoveContainer" containerID="4281cc046ec695e7543ebb463a22ff1ccdb80f6108b67e6d442bf93c46a88618" Oct 06 15:30:12 crc kubenswrapper[4888]: I1006 15:30:12.340480 4888 scope.go:117] "RemoveContainer" containerID="4fda17590c9dd1bd00f0d662562cab9f76d3e474f1dc92ac032e82d74abfc5ec" Oct 06 15:30:12 
crc kubenswrapper[4888]: I1006 15:30:12.391557 4888 scope.go:117] "RemoveContainer" containerID="a591a4654ae45b59eddaf24afb8d08d20fb3aa55d56a7b0cd3bb8d157653929f" Oct 06 15:30:17 crc kubenswrapper[4888]: I1006 15:30:17.031828 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-snfqf"] Oct 06 15:30:17 crc kubenswrapper[4888]: I1006 15:30:17.035704 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-snfqf"] Oct 06 15:30:18 crc kubenswrapper[4888]: I1006 15:30:18.933669 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28b31342-1634-47b6-ac0e-a6f4937111f7" path="/var/lib/kubelet/pods/28b31342-1634-47b6-ac0e-a6f4937111f7/volumes" Oct 06 15:30:22 crc kubenswrapper[4888]: I1006 15:30:22.921370 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:30:22 crc kubenswrapper[4888]: E1006 15:30:22.923052 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:30:34 crc kubenswrapper[4888]: I1006 15:30:34.922175 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:30:34 crc kubenswrapper[4888]: E1006 15:30:34.922734 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:30:44 crc kubenswrapper[4888]: I1006 15:30:44.049264 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-s4mwc"] Oct 06 15:30:44 crc kubenswrapper[4888]: I1006 15:30:44.059315 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-s4mwc"] Oct 06 15:30:44 crc kubenswrapper[4888]: I1006 15:30:44.936335 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e63fa23-500a-4bfa-9231-e4a6e0d7615d" path="/var/lib/kubelet/pods/8e63fa23-500a-4bfa-9231-e4a6e0d7615d/volumes" Oct 06 15:30:46 crc kubenswrapper[4888]: I1006 15:30:46.032749 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6727f"] Oct 06 15:30:46 crc kubenswrapper[4888]: I1006 15:30:46.056567 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6727f"] Oct 06 15:30:46 crc kubenswrapper[4888]: I1006 15:30:46.933007 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49b4955a-ce90-41d6-a9be-1b46072c3ab1" path="/var/lib/kubelet/pods/49b4955a-ce90-41d6-a9be-1b46072c3ab1/volumes" Oct 06 15:30:47 crc kubenswrapper[4888]: I1006 15:30:47.921692 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:30:47 crc kubenswrapper[4888]: E1006 15:30:47.922025 4888 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:30:58 crc kubenswrapper[4888]: I1006 15:30:58.921664 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:30:58 crc kubenswrapper[4888]: E1006 15:30:58.922418 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:31:10 crc kubenswrapper[4888]: I1006 15:31:10.927180 4888 scope.go:117] "RemoveContainer" containerID="a40d21010f886cbacc6b7b125eb7084d4adccf8cabbf0847fb3502d39204a729" Oct 06 15:31:12 crc kubenswrapper[4888]: I1006 15:31:12.008651 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"ec6f62ffb46bdf09d0134dd67ada334f065877ba22326f1c13b3144ad74a18e3"} Oct 06 15:31:12 crc kubenswrapper[4888]: I1006 15:31:12.576096 4888 scope.go:117] "RemoveContainer" containerID="133f018ca8b767a426199661cf8166e6ae225a00f9ac12524fbe84b53ad63d71" Oct 06 15:31:12 crc kubenswrapper[4888]: I1006 15:31:12.624693 4888 scope.go:117] "RemoveContainer" containerID="285ed94dab6f74db9e51accb3da0611b6ae076a173378bf2282347b7aa3787a6" Oct 06 15:31:12 crc kubenswrapper[4888]: I1006 15:31:12.687168 4888 scope.go:117] "RemoveContainer" containerID="df6290499d34c46bcce9be0a770a6105880b511dd603638463d2d0467a94f87f" Oct 06 15:31:27 crc kubenswrapper[4888]: I1006 15:31:27.056375 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-rr4td"] Oct 06 15:31:27 crc kubenswrapper[4888]: I1006 15:31:27.073581 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-rr4td"] Oct 06 15:31:28 crc kubenswrapper[4888]: I1006 15:31:28.935923 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c32a53-487b-42f5-ba2e-6508521a8cc3" path="/var/lib/kubelet/pods/39c32a53-487b-42f5-ba2e-6508521a8cc3/volumes" Oct 06 15:32:12 crc kubenswrapper[4888]: I1006 15:32:12.810928 4888 scope.go:117] "RemoveContainer" containerID="38ef1c46e9473045ad8a31b398487d8a8ade8f4223ae44a7db0e19f5e93d6e53" Oct 06 15:33:32 crc kubenswrapper[4888]: I1006 15:33:32.564297 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:33:32 crc kubenswrapper[4888]: I1006 15:33:32.565119 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:34:02 crc kubenswrapper[4888]: I1006 15:34:02.563925 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:34:02 crc kubenswrapper[4888]: I1006 15:34:02.564691 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:34:32 crc kubenswrapper[4888]: I1006 15:34:32.563884 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:34:32 crc kubenswrapper[4888]: I1006 15:34:32.564661 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:34:32 crc kubenswrapper[4888]: I1006 15:34:32.564721 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:34:32 crc kubenswrapper[4888]: I1006 15:34:32.565663 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec6f62ffb46bdf09d0134dd67ada334f065877ba22326f1c13b3144ad74a18e3"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 15:34:32 crc kubenswrapper[4888]: I1006 15:34:32.565765 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://ec6f62ffb46bdf09d0134dd67ada334f065877ba22326f1c13b3144ad74a18e3" gracePeriod=600 Oct 06 15:34:33 crc kubenswrapper[4888]: I1006 15:34:33.639466 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="ec6f62ffb46bdf09d0134dd67ada334f065877ba22326f1c13b3144ad74a18e3" exitCode=0 Oct 06 15:34:33 crc kubenswrapper[4888]: I1006 15:34:33.639539 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"ec6f62ffb46bdf09d0134dd67ada334f065877ba22326f1c13b3144ad74a18e3"} Oct 06 15:34:33 crc kubenswrapper[4888]: I1006 15:34:33.639846 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"} Oct 06 
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.467125 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6k5ms"]
Oct 06 15:35:44 crc kubenswrapper[4888]: E1006 15:35:44.468004 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6751bf31-8e5c-471f-bdbb-1ddd06bcf233" containerName="collect-profiles"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.468018 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="6751bf31-8e5c-471f-bdbb-1ddd06bcf233" containerName="collect-profiles"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.468230 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="6751bf31-8e5c-471f-bdbb-1ddd06bcf233" containerName="collect-profiles"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.469561 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6k5ms"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.495160 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6k5ms"]
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.589235 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-utilities\") pod \"community-operators-6k5ms\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " pod="openshift-marketplace/community-operators-6k5ms"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.589455 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzjmx\" (UniqueName: \"kubernetes.io/projected/eb166cfe-ed08-487c-9cb1-7114fa1365d5-kube-api-access-hzjmx\") pod \"community-operators-6k5ms\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " pod="openshift-marketplace/community-operators-6k5ms"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.589664 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-catalog-content\") pod \"community-operators-6k5ms\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " pod="openshift-marketplace/community-operators-6k5ms"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.691046 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-utilities\") pod \"community-operators-6k5ms\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " pod="openshift-marketplace/community-operators-6k5ms"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.691235 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzjmx\" (UniqueName: \"kubernetes.io/projected/eb166cfe-ed08-487c-9cb1-7114fa1365d5-kube-api-access-hzjmx\") pod \"community-operators-6k5ms\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " pod="openshift-marketplace/community-operators-6k5ms"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.691310 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-catalog-content\") pod \"community-operators-6k5ms\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " pod="openshift-marketplace/community-operators-6k5ms"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.691648 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-utilities\") pod \"community-operators-6k5ms\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " pod="openshift-marketplace/community-operators-6k5ms"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.691661 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-catalog-content\") pod \"community-operators-6k5ms\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " pod="openshift-marketplace/community-operators-6k5ms"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.730693 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzjmx\" (UniqueName: \"kubernetes.io/projected/eb166cfe-ed08-487c-9cb1-7114fa1365d5-kube-api-access-hzjmx\") pod \"community-operators-6k5ms\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " pod="openshift-marketplace/community-operators-6k5ms"
Oct 06 15:35:44 crc kubenswrapper[4888]: I1006 15:35:44.797952 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6k5ms"
Oct 06 15:35:45 crc kubenswrapper[4888]: I1006 15:35:45.209427 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6k5ms"]
Oct 06 15:35:45 crc kubenswrapper[4888]: I1006 15:35:45.234933 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k5ms" event={"ID":"eb166cfe-ed08-487c-9cb1-7114fa1365d5","Type":"ContainerStarted","Data":"c895e7118dd9fc819b61c8f4cb3e5b10ab5a5f6cc322c377b210223990390fad"}
Oct 06 15:35:46 crc kubenswrapper[4888]: I1006 15:35:46.244834 4888 generic.go:334] "Generic (PLEG): container finished" podID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerID="ea5e8604d7b413965019866b32a37d6fb87b2b3eaeba3a1e8003ec44a50c7592" exitCode=0
Oct 06 15:35:46 crc kubenswrapper[4888]: I1006 15:35:46.244942 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k5ms" event={"ID":"eb166cfe-ed08-487c-9cb1-7114fa1365d5","Type":"ContainerDied","Data":"ea5e8604d7b413965019866b32a37d6fb87b2b3eaeba3a1e8003ec44a50c7592"}
Oct 06 15:35:46 crc kubenswrapper[4888]: I1006 15:35:46.247536 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 06 15:35:47 crc kubenswrapper[4888]: I1006 15:35:47.257833 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k5ms" event={"ID":"eb166cfe-ed08-487c-9cb1-7114fa1365d5","Type":"ContainerStarted","Data":"4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18"}
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.257693 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6vj5r"]
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.261506 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.269290 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vj5r"]
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.356115 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-catalog-content\") pod \"certified-operators-6vj5r\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.356284 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5qtv\" (UniqueName: \"kubernetes.io/projected/1e8d4355-4d1b-4090-81b0-c7870a84ff06-kube-api-access-b5qtv\") pod \"certified-operators-6vj5r\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.356340 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-utilities\") pod \"certified-operators-6vj5r\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.457443 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5qtv\" (UniqueName: \"kubernetes.io/projected/1e8d4355-4d1b-4090-81b0-c7870a84ff06-kube-api-access-b5qtv\") pod \"certified-operators-6vj5r\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.457514 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-utilities\") pod \"certified-operators-6vj5r\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.457535 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-catalog-content\") pod \"certified-operators-6vj5r\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.458051 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-utilities\") pod \"certified-operators-6vj5r\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.458123 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-catalog-content\") pod \"certified-operators-6vj5r\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.482645 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5qtv\" (UniqueName: \"kubernetes.io/projected/1e8d4355-4d1b-4090-81b0-c7870a84ff06-kube-api-access-b5qtv\") pod \"certified-operators-6vj5r\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:35:48 crc kubenswrapper[4888]: I1006 15:35:48.583029 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:35:49 crc kubenswrapper[4888]: I1006 15:35:49.084551 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vj5r"]
Oct 06 15:35:49 crc kubenswrapper[4888]: I1006 15:35:49.282304 4888 generic.go:334] "Generic (PLEG): container finished" podID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerID="4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18" exitCode=0
Oct 06 15:35:49 crc kubenswrapper[4888]: I1006 15:35:49.282400 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k5ms" event={"ID":"eb166cfe-ed08-487c-9cb1-7114fa1365d5","Type":"ContainerDied","Data":"4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18"}
Oct 06 15:35:49 crc kubenswrapper[4888]: I1006 15:35:49.285033 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vj5r" event={"ID":"1e8d4355-4d1b-4090-81b0-c7870a84ff06","Type":"ContainerStarted","Data":"7de164601ae2d720fa4b911310df052408e87a6d7eb650597443f804d9c96a5b"}
Oct 06 15:35:50 crc kubenswrapper[4888]: I1006 15:35:50.296713 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k5ms" event={"ID":"eb166cfe-ed08-487c-9cb1-7114fa1365d5","Type":"ContainerStarted","Data":"78ff1f06a353bd3e8aa1440cfc6e1f91170833f8f4f57d143319e568d5e1e8a3"}
Oct 06 15:35:50 crc kubenswrapper[4888]: I1006 15:35:50.299141 4888 generic.go:334] "Generic (PLEG): container finished" podID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerID="e97cd592229384796553c41c22a3f2e0534f45f0a911d2863ecf545bf564d405" exitCode=0
Oct 06 15:35:50 crc kubenswrapper[4888]: I1006 15:35:50.299200 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vj5r" event={"ID":"1e8d4355-4d1b-4090-81b0-c7870a84ff06","Type":"ContainerDied","Data":"e97cd592229384796553c41c22a3f2e0534f45f0a911d2863ecf545bf564d405"}
Oct 06 15:35:50 crc kubenswrapper[4888]: I1006 15:35:50.321019 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6k5ms" podStartSLOduration=2.835181545 podStartE2EDuration="6.321003312s" podCreationTimestamp="2025-10-06 15:35:44 +0000 UTC" firstStartedPulling="2025-10-06 15:35:46.24725236 +0000 UTC m=+2086.059603078" lastFinishedPulling="2025-10-06 15:35:49.733074117 +0000 UTC m=+2089.545424845" observedRunningTime="2025-10-06 15:35:50.3186842 +0000 UTC m=+2090.131034938" watchObservedRunningTime="2025-10-06 15:35:50.321003312 +0000 UTC m=+2090.133354030"
Oct 06 15:35:51 crc kubenswrapper[4888]: I1006 15:35:51.307535 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vj5r" event={"ID":"1e8d4355-4d1b-4090-81b0-c7870a84ff06","Type":"ContainerStarted","Data":"682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944"}
Oct 06 15:35:54 crc kubenswrapper[4888]: I1006 15:35:54.335564 4888 generic.go:334] "Generic (PLEG): container finished" podID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerID="682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944" exitCode=0
podID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerID="682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944" exitCode=0 Oct 06 15:35:54 crc kubenswrapper[4888]: I1006 15:35:54.335621 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vj5r" event={"ID":"1e8d4355-4d1b-4090-81b0-c7870a84ff06","Type":"ContainerDied","Data":"682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944"} Oct 06 15:35:54 crc kubenswrapper[4888]: I1006 15:35:54.799010 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6k5ms" Oct 06 15:35:54 crc kubenswrapper[4888]: I1006 15:35:54.799265 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6k5ms" Oct 06 15:35:55 crc kubenswrapper[4888]: I1006 15:35:55.345857 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vj5r" event={"ID":"1e8d4355-4d1b-4090-81b0-c7870a84ff06","Type":"ContainerStarted","Data":"95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98"} Oct 06 15:35:55 crc kubenswrapper[4888]: I1006 15:35:55.370999 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6vj5r" podStartSLOduration=2.773158773 podStartE2EDuration="7.370982866s" podCreationTimestamp="2025-10-06 15:35:48 +0000 UTC" firstStartedPulling="2025-10-06 15:35:50.30033237 +0000 UTC m=+2090.112683088" lastFinishedPulling="2025-10-06 15:35:54.898156463 +0000 UTC m=+2094.710507181" observedRunningTime="2025-10-06 15:35:55.367232187 +0000 UTC m=+2095.179582945" watchObservedRunningTime="2025-10-06 15:35:55.370982866 +0000 UTC m=+2095.183333574" Oct 06 15:35:55 crc kubenswrapper[4888]: I1006 15:35:55.846294 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6k5ms" podUID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerName="registry-server" probeResult="failure" output=< Oct 06 15:35:55 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s Oct 06 15:35:55 crc kubenswrapper[4888]: > Oct 06 15:35:58 crc kubenswrapper[4888]: I1006 15:35:58.584357 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6vj5r" Oct 06 15:35:58 crc kubenswrapper[4888]: I1006 15:35:58.584647 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6vj5r" Oct 06 15:35:59 crc kubenswrapper[4888]: I1006 15:35:59.627488 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6vj5r" podUID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerName="registry-server" probeResult="failure" output=< Oct 06 15:35:59 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s Oct 06 15:35:59 crc kubenswrapper[4888]: > Oct 06 15:36:04 crc kubenswrapper[4888]: I1006 15:36:04.847729 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6k5ms" Oct 06 15:36:04 crc kubenswrapper[4888]: I1006 15:36:04.896391 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6k5ms" Oct 06 15:36:05 crc kubenswrapper[4888]: I1006 15:36:05.084644 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-6k5ms"] Oct 06 15:36:06 crc kubenswrapper[4888]: I1006 15:36:06.437650 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6k5ms" podUID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerName="registry-server" containerID="cri-o://78ff1f06a353bd3e8aa1440cfc6e1f91170833f8f4f57d143319e568d5e1e8a3" gracePeriod=2 Oct 06 15:36:06 crc kubenswrapper[4888]: I1006 15:36:06.863075 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6k5ms" Oct 06 15:36:06 crc kubenswrapper[4888]: I1006 15:36:06.929952 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzjmx\" (UniqueName: \"kubernetes.io/projected/eb166cfe-ed08-487c-9cb1-7114fa1365d5-kube-api-access-hzjmx\") pod \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " Oct 06 15:36:06 crc kubenswrapper[4888]: I1006 15:36:06.929996 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-utilities\") pod \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " Oct 06 15:36:06 crc kubenswrapper[4888]: I1006 15:36:06.930047 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-catalog-content\") pod \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\" (UID: \"eb166cfe-ed08-487c-9cb1-7114fa1365d5\") " Oct 06 15:36:06 crc kubenswrapper[4888]: I1006 15:36:06.930864 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-utilities" (OuterVolumeSpecName: "utilities") pod "eb166cfe-ed08-487c-9cb1-7114fa1365d5" (UID: "eb166cfe-ed08-487c-9cb1-7114fa1365d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:36:06 crc kubenswrapper[4888]: I1006 15:36:06.935953 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb166cfe-ed08-487c-9cb1-7114fa1365d5-kube-api-access-hzjmx" (OuterVolumeSpecName: "kube-api-access-hzjmx") pod "eb166cfe-ed08-487c-9cb1-7114fa1365d5" (UID: "eb166cfe-ed08-487c-9cb1-7114fa1365d5"). InnerVolumeSpecName "kube-api-access-hzjmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:36:06 crc kubenswrapper[4888]: I1006 15:36:06.973829 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb166cfe-ed08-487c-9cb1-7114fa1365d5" (UID: "eb166cfe-ed08-487c-9cb1-7114fa1365d5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.032708 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzjmx\" (UniqueName: \"kubernetes.io/projected/eb166cfe-ed08-487c-9cb1-7114fa1365d5-kube-api-access-hzjmx\") on node \"crc\" DevicePath \"\"" Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.032748 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.032761 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb166cfe-ed08-487c-9cb1-7114fa1365d5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.448086 4888 generic.go:334] "Generic (PLEG): container finished" podID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerID="78ff1f06a353bd3e8aa1440cfc6e1f91170833f8f4f57d143319e568d5e1e8a3" exitCode=0 Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.448134 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k5ms" event={"ID":"eb166cfe-ed08-487c-9cb1-7114fa1365d5","Type":"ContainerDied","Data":"78ff1f06a353bd3e8aa1440cfc6e1f91170833f8f4f57d143319e568d5e1e8a3"} Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.448152 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6k5ms" Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.448165 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k5ms" event={"ID":"eb166cfe-ed08-487c-9cb1-7114fa1365d5","Type":"ContainerDied","Data":"c895e7118dd9fc819b61c8f4cb3e5b10ab5a5f6cc322c377b210223990390fad"} Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.448186 4888 scope.go:117] "RemoveContainer" containerID="78ff1f06a353bd3e8aa1440cfc6e1f91170833f8f4f57d143319e568d5e1e8a3" Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.465269 4888 scope.go:117] "RemoveContainer" containerID="4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18" Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.496253 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6k5ms"] Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.498642 4888 scope.go:117] "RemoveContainer" containerID="ea5e8604d7b413965019866b32a37d6fb87b2b3eaeba3a1e8003ec44a50c7592" Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.499921 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6k5ms"] Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.540936 4888 scope.go:117] "RemoveContainer" containerID="78ff1f06a353bd3e8aa1440cfc6e1f91170833f8f4f57d143319e568d5e1e8a3" Oct 06 15:36:07 crc kubenswrapper[4888]: E1006 15:36:07.541472 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ff1f06a353bd3e8aa1440cfc6e1f91170833f8f4f57d143319e568d5e1e8a3\": container with ID starting with 78ff1f06a353bd3e8aa1440cfc6e1f91170833f8f4f57d143319e568d5e1e8a3 not found: ID does not exist" containerID="78ff1f06a353bd3e8aa1440cfc6e1f91170833f8f4f57d143319e568d5e1e8a3" Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.541515 
Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.541542 4888 scope.go:117] "RemoveContainer" containerID="4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18"
Oct 06 15:36:07 crc kubenswrapper[4888]: E1006 15:36:07.541877 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18\": container with ID starting with 4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18 not found: ID does not exist" containerID="4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18"
Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.541919 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18"} err="failed to get container status \"4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18\": rpc error: code = NotFound desc = could not find container \"4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18\": container with ID starting with 4d07e26a798ee88e329101b86d676f63b7b4f7a46a8813f3f9ad1035d6291d18 not found: ID does not exist"
Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.541961 4888 scope.go:117] "RemoveContainer" containerID="ea5e8604d7b413965019866b32a37d6fb87b2b3eaeba3a1e8003ec44a50c7592"
Oct 06 15:36:07 crc kubenswrapper[4888]: E1006 15:36:07.542295 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea5e8604d7b413965019866b32a37d6fb87b2b3eaeba3a1e8003ec44a50c7592\": container with ID starting with ea5e8604d7b413965019866b32a37d6fb87b2b3eaeba3a1e8003ec44a50c7592 not found: ID does not exist" containerID="ea5e8604d7b413965019866b32a37d6fb87b2b3eaeba3a1e8003ec44a50c7592"
Oct 06 15:36:07 crc kubenswrapper[4888]: I1006 15:36:07.542350 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5e8604d7b413965019866b32a37d6fb87b2b3eaeba3a1e8003ec44a50c7592"} err="failed to get container status \"ea5e8604d7b413965019866b32a37d6fb87b2b3eaeba3a1e8003ec44a50c7592\": rpc error: code = NotFound desc = could not find container \"ea5e8604d7b413965019866b32a37d6fb87b2b3eaeba3a1e8003ec44a50c7592\": container with ID starting with ea5e8604d7b413965019866b32a37d6fb87b2b3eaeba3a1e8003ec44a50c7592 not found: ID does not exist"
Oct 06 15:36:08 crc kubenswrapper[4888]: I1006 15:36:08.633230 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:36:08 crc kubenswrapper[4888]: I1006 15:36:08.680676 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6vj5r"
Oct 06 15:36:08 crc kubenswrapper[4888]: I1006 15:36:08.935826 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" path="/var/lib/kubelet/pods/eb166cfe-ed08-487c-9cb1-7114fa1365d5/volumes"
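The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" pairs above are benign: the kubelet deletes a container and a follow-up status lookup races against that deletion, so CRI-O answers NotFound, which pod_container_deletor merely logs. CRI clients conventionally treat NotFound on a status or delete call as "already gone"; a sketch of that check, assuming the standard gRPC status and codes packages:

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether a CRI error just means the container was
// deleted before we asked about it (the situation in the log above).
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(alreadyGone(err))                  // true: safe to ignore
	fmt.Println(alreadyGone(errors.New("boom")))   // false: a real failure
}
```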
podUID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" path="/var/lib/kubelet/pods/eb166cfe-ed08-487c-9cb1-7114fa1365d5/volumes" Oct 06 15:36:09 crc kubenswrapper[4888]: I1006 15:36:09.483535 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vj5r"] Oct 06 15:36:10 crc kubenswrapper[4888]: I1006 15:36:10.473892 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6vj5r" podUID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerName="registry-server" containerID="cri-o://95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98" gracePeriod=2 Oct 06 15:36:10 crc kubenswrapper[4888]: I1006 15:36:10.893114 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vj5r" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.009736 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5qtv\" (UniqueName: \"kubernetes.io/projected/1e8d4355-4d1b-4090-81b0-c7870a84ff06-kube-api-access-b5qtv\") pod \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.010119 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-utilities\") pod \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.010930 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-catalog-content\") pod \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\" (UID: \"1e8d4355-4d1b-4090-81b0-c7870a84ff06\") " Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.010773 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-utilities" (OuterVolumeSpecName: "utilities") pod "1e8d4355-4d1b-4090-81b0-c7870a84ff06" (UID: "1e8d4355-4d1b-4090-81b0-c7870a84ff06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.012023 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.017509 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e8d4355-4d1b-4090-81b0-c7870a84ff06-kube-api-access-b5qtv" (OuterVolumeSpecName: "kube-api-access-b5qtv") pod "1e8d4355-4d1b-4090-81b0-c7870a84ff06" (UID: "1e8d4355-4d1b-4090-81b0-c7870a84ff06"). InnerVolumeSpecName "kube-api-access-b5qtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.054839 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e8d4355-4d1b-4090-81b0-c7870a84ff06" (UID: "1e8d4355-4d1b-4090-81b0-c7870a84ff06"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.113714 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e8d4355-4d1b-4090-81b0-c7870a84ff06-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.113746 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5qtv\" (UniqueName: \"kubernetes.io/projected/1e8d4355-4d1b-4090-81b0-c7870a84ff06-kube-api-access-b5qtv\") on node \"crc\" DevicePath \"\"" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.483439 4888 generic.go:334] "Generic (PLEG): container finished" podID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerID="95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98" exitCode=0 Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.484002 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vj5r" event={"ID":"1e8d4355-4d1b-4090-81b0-c7870a84ff06","Type":"ContainerDied","Data":"95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98"} Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.484042 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vj5r" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.484508 4888 scope.go:117] "RemoveContainer" containerID="95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.484496 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vj5r" event={"ID":"1e8d4355-4d1b-4090-81b0-c7870a84ff06","Type":"ContainerDied","Data":"7de164601ae2d720fa4b911310df052408e87a6d7eb650597443f804d9c96a5b"} Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.518997 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vj5r"] Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.526528 4888 scope.go:117] "RemoveContainer" containerID="682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.539623 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6vj5r"] Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.555407 4888 scope.go:117] "RemoveContainer" containerID="e97cd592229384796553c41c22a3f2e0534f45f0a911d2863ecf545bf564d405" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.592192 4888 scope.go:117] "RemoveContainer" containerID="95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98" Oct 06 15:36:11 crc kubenswrapper[4888]: E1006 15:36:11.592609 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98\": container with ID starting with 95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98 not found: ID does not exist" containerID="95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.592640 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98"} err="failed to get container status 
\"95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98\": rpc error: code = NotFound desc = could not find container \"95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98\": container with ID starting with 95533bd2eb0847c23b09a2c6ea0b1f7d2e663f7c27bd3db44663b60de448fc98 not found: ID does not exist" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.592666 4888 scope.go:117] "RemoveContainer" containerID="682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944" Oct 06 15:36:11 crc kubenswrapper[4888]: E1006 15:36:11.593100 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944\": container with ID starting with 682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944 not found: ID does not exist" containerID="682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.593145 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944"} err="failed to get container status \"682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944\": rpc error: code = NotFound desc = could not find container \"682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944\": container with ID starting with 682fe25dc602325364ff94e0edbda5251ff2ddccead6ec5ab86d39cdcdd94944 not found: ID does not exist" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.593166 4888 scope.go:117] "RemoveContainer" containerID="e97cd592229384796553c41c22a3f2e0534f45f0a911d2863ecf545bf564d405" Oct 06 15:36:11 crc kubenswrapper[4888]: E1006 15:36:11.593637 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e97cd592229384796553c41c22a3f2e0534f45f0a911d2863ecf545bf564d405\": container with ID starting with e97cd592229384796553c41c22a3f2e0534f45f0a911d2863ecf545bf564d405 not found: ID does not exist" containerID="e97cd592229384796553c41c22a3f2e0534f45f0a911d2863ecf545bf564d405" Oct 06 15:36:11 crc kubenswrapper[4888]: I1006 15:36:11.593665 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e97cd592229384796553c41c22a3f2e0534f45f0a911d2863ecf545bf564d405"} err="failed to get container status \"e97cd592229384796553c41c22a3f2e0534f45f0a911d2863ecf545bf564d405\": rpc error: code = NotFound desc = could not find container \"e97cd592229384796553c41c22a3f2e0534f45f0a911d2863ecf545bf564d405\": container with ID starting with e97cd592229384796553c41c22a3f2e0534f45f0a911d2863ecf545bf564d405 not found: ID does not exist" Oct 06 15:36:12 crc kubenswrapper[4888]: I1006 15:36:12.945011 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" path="/var/lib/kubelet/pods/1e8d4355-4d1b-4090-81b0-c7870a84ff06/volumes" Oct 06 15:36:32 crc kubenswrapper[4888]: I1006 15:36:32.564300 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:36:32 crc kubenswrapper[4888]: I1006 15:36:32.564905 4888 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:37:02 crc kubenswrapper[4888]: I1006 15:37:02.563905 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:37:02 crc kubenswrapper[4888]: I1006 15:37:02.564477 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:37:32 crc kubenswrapper[4888]: I1006 15:37:32.563620 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:37:32 crc kubenswrapper[4888]: I1006 15:37:32.564267 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:37:32 crc kubenswrapper[4888]: I1006 15:37:32.564323 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:37:32 crc kubenswrapper[4888]: I1006 15:37:32.565209 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 15:37:32 crc kubenswrapper[4888]: I1006 15:37:32.565278 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" gracePeriod=600 Oct 06 15:37:32 crc kubenswrapper[4888]: E1006 15:37:32.692670 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:37:33 crc kubenswrapper[4888]: I1006 15:37:33.184971 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" exitCode=0 Oct 
Oct 06 15:37:33 crc kubenswrapper[4888]: I1006 15:37:33.185027 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"}
Oct 06 15:37:33 crc kubenswrapper[4888]: I1006 15:37:33.185059 4888 scope.go:117] "RemoveContainer" containerID="ec6f62ffb46bdf09d0134dd67ada334f065877ba22326f1c13b3144ad74a18e3"
Oct 06 15:37:33 crc kubenswrapper[4888]: I1006 15:37:33.185664 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"
Oct 06 15:37:33 crc kubenswrapper[4888]: E1006 15:37:33.185943 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:37:45 crc kubenswrapper[4888]: I1006 15:37:45.921765 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"
Oct 06 15:37:45 crc kubenswrapper[4888]: E1006 15:37:45.922433 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:37:59 crc kubenswrapper[4888]: I1006 15:37:59.921654 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"
Oct 06 15:37:59 crc kubenswrapper[4888]: E1006 15:37:59.922575 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:38:13 crc kubenswrapper[4888]: I1006 15:38:13.921596 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"
Oct 06 15:38:13 crc kubenswrapper[4888]: E1006 15:38:13.922185 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:38:26 crc kubenswrapper[4888]: I1006 15:38:26.922055 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"
Oct 06 15:38:26 crc kubenswrapper[4888]: E1006 15:38:26.923856 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:38:41 crc kubenswrapper[4888]: I1006 15:38:41.921723 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:38:41 crc kubenswrapper[4888]: E1006 15:38:41.923151 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:38:53 crc kubenswrapper[4888]: I1006 15:38:53.922016 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:38:53 crc kubenswrapper[4888]: E1006 15:38:53.923037 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:39:08 crc kubenswrapper[4888]: I1006 15:39:08.921988 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:39:08 crc kubenswrapper[4888]: E1006 15:39:08.922717 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:39:23 crc kubenswrapper[4888]: I1006 15:39:23.921251 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:39:23 crc kubenswrapper[4888]: E1006 15:39:23.922017 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:39:37 crc kubenswrapper[4888]: I1006 15:39:37.927638 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:39:37 crc kubenswrapper[4888]: E1006 15:39:37.928462 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Oct 06 15:39:51 crc kubenswrapper[4888]: I1006 15:39:51.921780 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"
Oct 06 15:39:51 crc kubenswrapper[4888]: E1006 15:39:51.922686 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:40:06 crc kubenswrapper[4888]: I1006 15:40:06.921317 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"
Oct 06 15:40:06 crc kubenswrapper[4888]: E1006 15:40:06.922203 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.528472 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cj6sx"]
Oct 06 15:40:15 crc kubenswrapper[4888]: E1006 15:40:15.530756 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerName="extract-utilities"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.530855 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerName="extract-utilities"
Oct 06 15:40:15 crc kubenswrapper[4888]: E1006 15:40:15.530913 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerName="registry-server"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.530965 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerName="registry-server"
Oct 06 15:40:15 crc kubenswrapper[4888]: E1006 15:40:15.531052 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerName="extract-utilities"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.531105 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerName="extract-utilities"
Oct 06 15:40:15 crc kubenswrapper[4888]: E1006 15:40:15.531172 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerName="extract-content"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.531223 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerName="extract-content"
Oct 06 15:40:15 crc kubenswrapper[4888]: E1006 15:40:15.531286 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerName="registry-server"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.531337 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerName="registry-server"
Oct 06 15:40:15 crc kubenswrapper[4888]: E1006 15:40:15.531405 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerName="extract-content"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.531461 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerName="extract-content"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.531682 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e8d4355-4d1b-4090-81b0-c7870a84ff06" containerName="registry-server"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.531761 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb166cfe-ed08-487c-9cb1-7114fa1365d5" containerName="registry-server"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.533112 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cj6sx"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.573253 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-catalog-content\") pod \"redhat-marketplace-cj6sx\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " pod="openshift-marketplace/redhat-marketplace-cj6sx"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.573539 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-utilities\") pod \"redhat-marketplace-cj6sx\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " pod="openshift-marketplace/redhat-marketplace-cj6sx"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.573677 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlxzx\" (UniqueName: \"kubernetes.io/projected/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-kube-api-access-mlxzx\") pod \"redhat-marketplace-cj6sx\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " pod="openshift-marketplace/redhat-marketplace-cj6sx"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.619367 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cj6sx"]
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.675587 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-catalog-content\") pod \"redhat-marketplace-cj6sx\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " pod="openshift-marketplace/redhat-marketplace-cj6sx"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.675719 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-utilities\") pod \"redhat-marketplace-cj6sx\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " pod="openshift-marketplace/redhat-marketplace-cj6sx"
Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.675791 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlxzx\" (UniqueName: \"kubernetes.io/projected/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-kube-api-access-mlxzx\") pod \"redhat-marketplace-cj6sx\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " pod="openshift-marketplace/redhat-marketplace-cj6sx"
\"kubernetes.io/projected/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-kube-api-access-mlxzx\") pod \"redhat-marketplace-cj6sx\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " pod="openshift-marketplace/redhat-marketplace-cj6sx" Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.676602 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-catalog-content\") pod \"redhat-marketplace-cj6sx\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " pod="openshift-marketplace/redhat-marketplace-cj6sx" Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.676936 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-utilities\") pod \"redhat-marketplace-cj6sx\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " pod="openshift-marketplace/redhat-marketplace-cj6sx" Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.697867 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlxzx\" (UniqueName: \"kubernetes.io/projected/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-kube-api-access-mlxzx\") pod \"redhat-marketplace-cj6sx\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " pod="openshift-marketplace/redhat-marketplace-cj6sx" Oct 06 15:40:15 crc kubenswrapper[4888]: I1006 15:40:15.858986 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cj6sx" Oct 06 15:40:16 crc kubenswrapper[4888]: I1006 15:40:16.335997 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cj6sx"] Oct 06 15:40:16 crc kubenswrapper[4888]: I1006 15:40:16.551158 4888 generic.go:334] "Generic (PLEG): container finished" podID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" containerID="f4aa1b74d65cbcc302c38bc3dfa57b08c702a4ebe6e6368afd1471a74b3bbf57" exitCode=0 Oct 06 15:40:16 crc kubenswrapper[4888]: I1006 15:40:16.551200 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj6sx" event={"ID":"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1","Type":"ContainerDied","Data":"f4aa1b74d65cbcc302c38bc3dfa57b08c702a4ebe6e6368afd1471a74b3bbf57"} Oct 06 15:40:16 crc kubenswrapper[4888]: I1006 15:40:16.551227 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj6sx" event={"ID":"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1","Type":"ContainerStarted","Data":"67c8c53bea28fb941bb09d7d6fabf1318908dd3861d6eb38cbb1d52c7b0732a9"} Oct 06 15:40:17 crc kubenswrapper[4888]: I1006 15:40:17.565043 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj6sx" event={"ID":"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1","Type":"ContainerStarted","Data":"84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22"} Oct 06 15:40:18 crc kubenswrapper[4888]: I1006 15:40:18.574422 4888 generic.go:334] "Generic (PLEG): container finished" podID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" containerID="84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22" exitCode=0 Oct 06 15:40:18 crc kubenswrapper[4888]: I1006 15:40:18.574523 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj6sx" event={"ID":"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1","Type":"ContainerDied","Data":"84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22"} Oct 06 
Oct 06 15:40:19 crc kubenswrapper[4888]: I1006 15:40:19.604031 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cj6sx" podStartSLOduration=2.05687316 podStartE2EDuration="4.604012446s" podCreationTimestamp="2025-10-06 15:40:15 +0000 UTC" firstStartedPulling="2025-10-06 15:40:16.552972528 +0000 UTC m=+2356.365323246" lastFinishedPulling="2025-10-06 15:40:19.100111814 +0000 UTC m=+2358.912462532" observedRunningTime="2025-10-06 15:40:19.601883059 +0000 UTC m=+2359.414233787" watchObservedRunningTime="2025-10-06 15:40:19.604012446 +0000 UTC m=+2359.416363164"
Oct 06 15:40:21 crc kubenswrapper[4888]: I1006 15:40:21.922186 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"
Oct 06 15:40:21 crc kubenswrapper[4888]: E1006 15:40:21.922861 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.383834 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cr228"]
Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.386490 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cr228"
Need to start a new one" pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.394462 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cr228"] Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.445331 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc9xd\" (UniqueName: \"kubernetes.io/projected/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-kube-api-access-cc9xd\") pod \"redhat-operators-cr228\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") " pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.445399 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-catalog-content\") pod \"redhat-operators-cr228\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") " pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.445453 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-utilities\") pod \"redhat-operators-cr228\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") " pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.547381 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-utilities\") pod \"redhat-operators-cr228\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") " pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.547582 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc9xd\" (UniqueName: \"kubernetes.io/projected/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-kube-api-access-cc9xd\") pod \"redhat-operators-cr228\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") " pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.547648 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-catalog-content\") pod \"redhat-operators-cr228\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") " pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.548010 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-utilities\") pod \"redhat-operators-cr228\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") " pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.548103 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-catalog-content\") pod \"redhat-operators-cr228\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") " pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.576967 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cc9xd\" (UniqueName: \"kubernetes.io/projected/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-kube-api-access-cc9xd\") pod \"redhat-operators-cr228\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") " pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:40:23 crc kubenswrapper[4888]: I1006 15:40:23.708239 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:40:24 crc kubenswrapper[4888]: I1006 15:40:24.202740 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cr228"] Oct 06 15:40:24 crc kubenswrapper[4888]: W1006 15:40:24.210552 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b1f5131_a9e5_4f64_aa78_88016bfa1fcd.slice/crio-106521c59aab04828c4c1c2adca9f17184ec6c9778ddf2059045c09948b8b9f3 WatchSource:0}: Error finding container 106521c59aab04828c4c1c2adca9f17184ec6c9778ddf2059045c09948b8b9f3: Status 404 returned error can't find the container with id 106521c59aab04828c4c1c2adca9f17184ec6c9778ddf2059045c09948b8b9f3 Oct 06 15:40:24 crc kubenswrapper[4888]: I1006 15:40:24.630108 4888 generic.go:334] "Generic (PLEG): container finished" podID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerID="f969d7b51aaaa1ec2715fb7ee93309ade6c37b7c274ccb860e05defd5c961c83" exitCode=0 Oct 06 15:40:24 crc kubenswrapper[4888]: I1006 15:40:24.630159 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr228" event={"ID":"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd","Type":"ContainerDied","Data":"f969d7b51aaaa1ec2715fb7ee93309ade6c37b7c274ccb860e05defd5c961c83"} Oct 06 15:40:24 crc kubenswrapper[4888]: I1006 15:40:24.630189 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr228" event={"ID":"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd","Type":"ContainerStarted","Data":"106521c59aab04828c4c1c2adca9f17184ec6c9778ddf2059045c09948b8b9f3"} Oct 06 15:40:25 crc kubenswrapper[4888]: I1006 15:40:25.859450 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cj6sx" Oct 06 15:40:25 crc kubenswrapper[4888]: I1006 15:40:25.860405 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cj6sx" Oct 06 15:40:25 crc kubenswrapper[4888]: I1006 15:40:25.914351 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cj6sx" Oct 06 15:40:26 crc kubenswrapper[4888]: I1006 15:40:26.658090 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr228" event={"ID":"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd","Type":"ContainerStarted","Data":"915a58daa40c1a4b5ecc599fcefe1d7b1af49391dda7e8ae8f03ded606007ab1"} Oct 06 15:40:26 crc kubenswrapper[4888]: I1006 15:40:26.718266 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cj6sx" Oct 06 15:40:28 crc kubenswrapper[4888]: I1006 15:40:28.156899 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cj6sx"] Oct 06 15:40:28 crc kubenswrapper[4888]: I1006 15:40:28.673853 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cj6sx" podUID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" 
containerName="registry-server" containerID="cri-o://9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65" gracePeriod=2 Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.161618 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cj6sx" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.260861 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-catalog-content\") pod \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.260939 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlxzx\" (UniqueName: \"kubernetes.io/projected/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-kube-api-access-mlxzx\") pod \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.261062 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-utilities\") pod \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\" (UID: \"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1\") " Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.261672 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-utilities" (OuterVolumeSpecName: "utilities") pod "5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" (UID: "5e329c3f-abe3-4cf7-8ddd-96c89f07cab1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.266403 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-kube-api-access-mlxzx" (OuterVolumeSpecName: "kube-api-access-mlxzx") pod "5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" (UID: "5e329c3f-abe3-4cf7-8ddd-96c89f07cab1"). InnerVolumeSpecName "kube-api-access-mlxzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.273720 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" (UID: "5e329c3f-abe3-4cf7-8ddd-96c89f07cab1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.363597 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.363897 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlxzx\" (UniqueName: \"kubernetes.io/projected/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-kube-api-access-mlxzx\") on node \"crc\" DevicePath \"\"" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.363911 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.683492 4888 generic.go:334] "Generic (PLEG): container finished" podID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" containerID="9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65" exitCode=0 Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.683539 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj6sx" event={"ID":"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1","Type":"ContainerDied","Data":"9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65"} Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.683549 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cj6sx" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.683575 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cj6sx" event={"ID":"5e329c3f-abe3-4cf7-8ddd-96c89f07cab1","Type":"ContainerDied","Data":"67c8c53bea28fb941bb09d7d6fabf1318908dd3861d6eb38cbb1d52c7b0732a9"} Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.683597 4888 scope.go:117] "RemoveContainer" containerID="9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.705771 4888 scope.go:117] "RemoveContainer" containerID="84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.726161 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cj6sx"] Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.734049 4888 scope.go:117] "RemoveContainer" containerID="f4aa1b74d65cbcc302c38bc3dfa57b08c702a4ebe6e6368afd1471a74b3bbf57" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.741184 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cj6sx"] Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.782752 4888 scope.go:117] "RemoveContainer" containerID="9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65" Oct 06 15:40:29 crc kubenswrapper[4888]: E1006 15:40:29.783539 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65\": container with ID starting with 9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65 not found: ID does not exist" containerID="9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.783570 4888 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65"} err="failed to get container status \"9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65\": rpc error: code = NotFound desc = could not find container \"9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65\": container with ID starting with 9dd692db01926fb42b855d629b7442b01b3548a096d8dfa54002abba17950e65 not found: ID does not exist" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.783592 4888 scope.go:117] "RemoveContainer" containerID="84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22" Oct 06 15:40:29 crc kubenswrapper[4888]: E1006 15:40:29.783977 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22\": container with ID starting with 84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22 not found: ID does not exist" containerID="84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.784003 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22"} err="failed to get container status \"84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22\": rpc error: code = NotFound desc = could not find container \"84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22\": container with ID starting with 84278eb8e154717f7e9e6b56a203bdd111cdfba85b030419677957fe91669c22 not found: ID does not exist" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.784090 4888 scope.go:117] "RemoveContainer" containerID="f4aa1b74d65cbcc302c38bc3dfa57b08c702a4ebe6e6368afd1471a74b3bbf57" Oct 06 15:40:29 crc kubenswrapper[4888]: E1006 15:40:29.784456 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4aa1b74d65cbcc302c38bc3dfa57b08c702a4ebe6e6368afd1471a74b3bbf57\": container with ID starting with f4aa1b74d65cbcc302c38bc3dfa57b08c702a4ebe6e6368afd1471a74b3bbf57 not found: ID does not exist" containerID="f4aa1b74d65cbcc302c38bc3dfa57b08c702a4ebe6e6368afd1471a74b3bbf57" Oct 06 15:40:29 crc kubenswrapper[4888]: I1006 15:40:29.784476 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4aa1b74d65cbcc302c38bc3dfa57b08c702a4ebe6e6368afd1471a74b3bbf57"} err="failed to get container status \"f4aa1b74d65cbcc302c38bc3dfa57b08c702a4ebe6e6368afd1471a74b3bbf57\": rpc error: code = NotFound desc = could not find container \"f4aa1b74d65cbcc302c38bc3dfa57b08c702a4ebe6e6368afd1471a74b3bbf57\": container with ID starting with f4aa1b74d65cbcc302c38bc3dfa57b08c702a4ebe6e6368afd1471a74b3bbf57 not found: ID does not exist" Oct 06 15:40:30 crc kubenswrapper[4888]: I1006 15:40:30.932461 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" path="/var/lib/kubelet/pods/5e329c3f-abe3-4cf7-8ddd-96c89f07cab1/volumes" Oct 06 15:40:31 crc kubenswrapper[4888]: I1006 15:40:31.703837 4888 generic.go:334] "Generic (PLEG): container finished" podID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerID="915a58daa40c1a4b5ecc599fcefe1d7b1af49391dda7e8ae8f03ded606007ab1" exitCode=0 Oct 06 15:40:31 crc kubenswrapper[4888]: I1006 
Oct 06 15:40:32 crc kubenswrapper[4888]: I1006 15:40:32.713018 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr228" event={"ID":"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd","Type":"ContainerStarted","Data":"d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25"}
Oct 06 15:40:32 crc kubenswrapper[4888]: I1006 15:40:32.739464 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cr228" podStartSLOduration=2.043896011 podStartE2EDuration="9.739442585s" podCreationTimestamp="2025-10-06 15:40:23 +0000 UTC" firstStartedPulling="2025-10-06 15:40:24.632086464 +0000 UTC m=+2364.444437182" lastFinishedPulling="2025-10-06 15:40:32.327633038 +0000 UTC m=+2372.139983756" observedRunningTime="2025-10-06 15:40:32.731822317 +0000 UTC m=+2372.544173055" watchObservedRunningTime="2025-10-06 15:40:32.739442585 +0000 UTC m=+2372.551793303"
Oct 06 15:40:33 crc kubenswrapper[4888]: I1006 15:40:33.708360 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cr228"
Oct 06 15:40:33 crc kubenswrapper[4888]: I1006 15:40:33.708636 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cr228"
Oct 06 15:40:34 crc kubenswrapper[4888]: I1006 15:40:34.762146 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cr228" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerName="registry-server" probeResult="failure" output=<
Oct 06 15:40:34 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s
Oct 06 15:40:34 crc kubenswrapper[4888]: >
Oct 06 15:40:35 crc kubenswrapper[4888]: I1006 15:40:35.921460 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"
Oct 06 15:40:35 crc kubenswrapper[4888]: E1006 15:40:35.922540 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:40:44 crc kubenswrapper[4888]: I1006 15:40:44.765355 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cr228" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerName="registry-server" probeResult="failure" output=<
Oct 06 15:40:44 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s
Oct 06 15:40:44 crc kubenswrapper[4888]: >
Oct 06 15:40:48 crc kubenswrapper[4888]: I1006 15:40:48.922358 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"
Oct 06 15:40:48 crc kubenswrapper[4888]: E1006 15:40:48.922964 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:40:54 crc kubenswrapper[4888]: I1006 15:40:54.758078 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cr228" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerName="registry-server" probeResult="failure" output=<
Oct 06 15:40:54 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s
Oct 06 15:40:54 crc kubenswrapper[4888]: >
Oct 06 15:41:00 crc kubenswrapper[4888]: I1006 15:41:00.932046 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e"
Oct 06 15:41:00 crc kubenswrapper[4888]: E1006 15:41:00.933314 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 15:41:03 crc kubenswrapper[4888]: I1006 15:41:03.756599 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cr228"
Oct 06 15:41:03 crc kubenswrapper[4888]: I1006 15:41:03.814471 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cr228"
Oct 06 15:41:03 crc kubenswrapper[4888]: I1006 15:41:03.997957 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cr228"]
Oct 06 15:41:05 crc kubenswrapper[4888]: I1006 15:41:05.012298 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cr228" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerName="registry-server" containerID="cri-o://d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25" gracePeriod=2
Oct 06 15:41:05 crc kubenswrapper[4888]: I1006 15:41:05.455114 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cr228"
Oct 06 15:41:05 crc kubenswrapper[4888]: I1006 15:41:05.556041 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-catalog-content\") pod \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") "
Oct 06 15:41:05 crc kubenswrapper[4888]: I1006 15:41:05.556177 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc9xd\" (UniqueName: \"kubernetes.io/projected/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-kube-api-access-cc9xd\") pod \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") "
Oct 06 15:41:05 crc kubenswrapper[4888]: I1006 15:41:05.556198 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-utilities\") pod \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\" (UID: \"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd\") "
Oct 06 15:41:05 crc kubenswrapper[4888]: I1006 15:41:05.557196 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-utilities" (OuterVolumeSpecName: "utilities") pod "9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" (UID: "9b1f5131-a9e5-4f64-aa78-88016bfa1fcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:41:05 crc kubenswrapper[4888]: I1006 15:41:05.562273 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-kube-api-access-cc9xd" (OuterVolumeSpecName: "kube-api-access-cc9xd") pod "9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" (UID: "9b1f5131-a9e5-4f64-aa78-88016bfa1fcd"). InnerVolumeSpecName "kube-api-access-cc9xd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 15:41:05 crc kubenswrapper[4888]: I1006 15:41:05.645887 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" (UID: "9b1f5131-a9e5-4f64-aa78-88016bfa1fcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 15:41:05 crc kubenswrapper[4888]: I1006 15:41:05.658356 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 15:41:05 crc kubenswrapper[4888]: I1006 15:41:05.658461 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc9xd\" (UniqueName: \"kubernetes.io/projected/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-kube-api-access-cc9xd\") on node \"crc\" DevicePath \"\""
Oct 06 15:41:05 crc kubenswrapper[4888]: I1006 15:41:05.658482 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.025599 4888 generic.go:334] "Generic (PLEG): container finished" podID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerID="d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25" exitCode=0
Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.025654 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr228" event={"ID":"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd","Type":"ContainerDied","Data":"d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25"}
Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.025687 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr228" event={"ID":"9b1f5131-a9e5-4f64-aa78-88016bfa1fcd","Type":"ContainerDied","Data":"106521c59aab04828c4c1c2adca9f17184ec6c9778ddf2059045c09948b8b9f3"}
Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.025707 4888 scope.go:117] "RemoveContainer" containerID="d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25"
Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.025935 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cr228"
Need to start a new one" pod="openshift-marketplace/redhat-operators-cr228" Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.050147 4888 scope.go:117] "RemoveContainer" containerID="915a58daa40c1a4b5ecc599fcefe1d7b1af49391dda7e8ae8f03ded606007ab1" Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.068745 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cr228"] Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.076224 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cr228"] Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.097677 4888 scope.go:117] "RemoveContainer" containerID="f969d7b51aaaa1ec2715fb7ee93309ade6c37b7c274ccb860e05defd5c961c83" Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.135617 4888 scope.go:117] "RemoveContainer" containerID="d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25" Oct 06 15:41:06 crc kubenswrapper[4888]: E1006 15:41:06.136231 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25\": container with ID starting with d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25 not found: ID does not exist" containerID="d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25" Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.136378 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25"} err="failed to get container status \"d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25\": rpc error: code = NotFound desc = could not find container \"d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25\": container with ID starting with d7965f7fcd012873872929861e3947654f89954f27fa475cae69a035c9f32e25 not found: ID does not exist" Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.136487 4888 scope.go:117] "RemoveContainer" containerID="915a58daa40c1a4b5ecc599fcefe1d7b1af49391dda7e8ae8f03ded606007ab1" Oct 06 15:41:06 crc kubenswrapper[4888]: E1006 15:41:06.137194 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"915a58daa40c1a4b5ecc599fcefe1d7b1af49391dda7e8ae8f03ded606007ab1\": container with ID starting with 915a58daa40c1a4b5ecc599fcefe1d7b1af49391dda7e8ae8f03ded606007ab1 not found: ID does not exist" containerID="915a58daa40c1a4b5ecc599fcefe1d7b1af49391dda7e8ae8f03ded606007ab1" Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.137305 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915a58daa40c1a4b5ecc599fcefe1d7b1af49391dda7e8ae8f03ded606007ab1"} err="failed to get container status \"915a58daa40c1a4b5ecc599fcefe1d7b1af49391dda7e8ae8f03ded606007ab1\": rpc error: code = NotFound desc = could not find container \"915a58daa40c1a4b5ecc599fcefe1d7b1af49391dda7e8ae8f03ded606007ab1\": container with ID starting with 915a58daa40c1a4b5ecc599fcefe1d7b1af49391dda7e8ae8f03ded606007ab1 not found: ID does not exist" Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.137509 4888 scope.go:117] "RemoveContainer" containerID="f969d7b51aaaa1ec2715fb7ee93309ade6c37b7c274ccb860e05defd5c961c83" Oct 06 15:41:06 crc kubenswrapper[4888]: E1006 15:41:06.137872 4888 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"f969d7b51aaaa1ec2715fb7ee93309ade6c37b7c274ccb860e05defd5c961c83\": container with ID starting with f969d7b51aaaa1ec2715fb7ee93309ade6c37b7c274ccb860e05defd5c961c83 not found: ID does not exist" containerID="f969d7b51aaaa1ec2715fb7ee93309ade6c37b7c274ccb860e05defd5c961c83" Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.137898 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f969d7b51aaaa1ec2715fb7ee93309ade6c37b7c274ccb860e05defd5c961c83"} err="failed to get container status \"f969d7b51aaaa1ec2715fb7ee93309ade6c37b7c274ccb860e05defd5c961c83\": rpc error: code = NotFound desc = could not find container \"f969d7b51aaaa1ec2715fb7ee93309ade6c37b7c274ccb860e05defd5c961c83\": container with ID starting with f969d7b51aaaa1ec2715fb7ee93309ade6c37b7c274ccb860e05defd5c961c83 not found: ID does not exist" Oct 06 15:41:06 crc kubenswrapper[4888]: I1006 15:41:06.935141 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" path="/var/lib/kubelet/pods/9b1f5131-a9e5-4f64-aa78-88016bfa1fcd/volumes" Oct 06 15:41:15 crc kubenswrapper[4888]: I1006 15:41:15.921044 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:41:15 crc kubenswrapper[4888]: E1006 15:41:15.921780 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:41:26 crc kubenswrapper[4888]: I1006 15:41:26.921305 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:41:26 crc kubenswrapper[4888]: E1006 15:41:26.922266 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:41:37 crc kubenswrapper[4888]: I1006 15:41:37.921932 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:41:37 crc kubenswrapper[4888]: E1006 15:41:37.923172 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:41:49 crc kubenswrapper[4888]: I1006 15:41:49.921880 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:41:49 crc kubenswrapper[4888]: E1006 15:41:49.922590 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:42:02 crc kubenswrapper[4888]: I1006 15:42:02.923537 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:42:02 crc kubenswrapper[4888]: E1006 15:42:02.924792 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:42:13 crc kubenswrapper[4888]: I1006 15:42:13.921229 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:42:13 crc kubenswrapper[4888]: E1006 15:42:13.922217 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:42:28 crc kubenswrapper[4888]: I1006 15:42:28.921884 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:42:28 crc kubenswrapper[4888]: E1006 15:42:28.922939 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:42:40 crc kubenswrapper[4888]: I1006 15:42:40.928064 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:42:41 crc kubenswrapper[4888]: I1006 15:42:41.820255 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"b457172481932fb01bcb9b352e889ca6a84c3f8759b7a9a28309ef990aea6469"} Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.154773 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv"] Oct 06 15:45:00 crc kubenswrapper[4888]: E1006 15:45:00.156484 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" containerName="extract-utilities" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.156582 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" containerName="extract-utilities" Oct 06 15:45:00 crc kubenswrapper[4888]: E1006 15:45:00.156663 4888 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerName="extract-content" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.156717 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerName="extract-content" Oct 06 15:45:00 crc kubenswrapper[4888]: E1006 15:45:00.156774 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerName="registry-server" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.156873 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerName="registry-server" Oct 06 15:45:00 crc kubenswrapper[4888]: E1006 15:45:00.156959 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerName="extract-utilities" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.157020 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerName="extract-utilities" Oct 06 15:45:00 crc kubenswrapper[4888]: E1006 15:45:00.157094 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" containerName="registry-server" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.157167 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" containerName="registry-server" Oct 06 15:45:00 crc kubenswrapper[4888]: E1006 15:45:00.157231 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" containerName="extract-content" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.157285 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" containerName="extract-content" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.157502 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e329c3f-abe3-4cf7-8ddd-96c89f07cab1" containerName="registry-server" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.157590 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b1f5131-a9e5-4f64-aa78-88016bfa1fcd" containerName="registry-server" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.158386 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.161614 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.161903 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.171707 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv"] Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.280570 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9fcd42d0-d922-4a79-91e3-3d86c91def6f-secret-volume\") pod \"collect-profiles-29329425-8zwcv\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.280910 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fcd42d0-d922-4a79-91e3-3d86c91def6f-config-volume\") pod \"collect-profiles-29329425-8zwcv\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.280978 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g674t\" (UniqueName: \"kubernetes.io/projected/9fcd42d0-d922-4a79-91e3-3d86c91def6f-kube-api-access-g674t\") pod \"collect-profiles-29329425-8zwcv\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.382940 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g674t\" (UniqueName: \"kubernetes.io/projected/9fcd42d0-d922-4a79-91e3-3d86c91def6f-kube-api-access-g674t\") pod \"collect-profiles-29329425-8zwcv\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.383095 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9fcd42d0-d922-4a79-91e3-3d86c91def6f-secret-volume\") pod \"collect-profiles-29329425-8zwcv\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.383175 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fcd42d0-d922-4a79-91e3-3d86c91def6f-config-volume\") pod \"collect-profiles-29329425-8zwcv\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.384150 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fcd42d0-d922-4a79-91e3-3d86c91def6f-config-volume\") pod 
\"collect-profiles-29329425-8zwcv\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.391110 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9fcd42d0-d922-4a79-91e3-3d86c91def6f-secret-volume\") pod \"collect-profiles-29329425-8zwcv\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.400856 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g674t\" (UniqueName: \"kubernetes.io/projected/9fcd42d0-d922-4a79-91e3-3d86c91def6f-kube-api-access-g674t\") pod \"collect-profiles-29329425-8zwcv\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.497195 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:00 crc kubenswrapper[4888]: I1006 15:45:00.974767 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv"] Oct 06 15:45:01 crc kubenswrapper[4888]: I1006 15:45:01.930439 4888 generic.go:334] "Generic (PLEG): container finished" podID="9fcd42d0-d922-4a79-91e3-3d86c91def6f" containerID="08d701dd48d26b82a047684d8fddef8e2aaa066e72b23f5b64fcfecf80edd07c" exitCode=0 Oct 06 15:45:01 crc kubenswrapper[4888]: I1006 15:45:01.930806 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" event={"ID":"9fcd42d0-d922-4a79-91e3-3d86c91def6f","Type":"ContainerDied","Data":"08d701dd48d26b82a047684d8fddef8e2aaa066e72b23f5b64fcfecf80edd07c"} Oct 06 15:45:01 crc kubenswrapper[4888]: I1006 15:45:01.930856 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" event={"ID":"9fcd42d0-d922-4a79-91e3-3d86c91def6f","Type":"ContainerStarted","Data":"fd9e607cb87e0514d2764ee8cfa8ab02b9f62257c698697a49bac5ab58e9cc17"} Oct 06 15:45:02 crc kubenswrapper[4888]: I1006 15:45:02.563358 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:45:02 crc kubenswrapper[4888]: I1006 15:45:02.563415 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.273071 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.442296 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9fcd42d0-d922-4a79-91e3-3d86c91def6f-secret-volume\") pod \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.442643 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fcd42d0-d922-4a79-91e3-3d86c91def6f-config-volume\") pod \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.442906 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g674t\" (UniqueName: \"kubernetes.io/projected/9fcd42d0-d922-4a79-91e3-3d86c91def6f-kube-api-access-g674t\") pod \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\" (UID: \"9fcd42d0-d922-4a79-91e3-3d86c91def6f\") " Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.444137 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fcd42d0-d922-4a79-91e3-3d86c91def6f-config-volume" (OuterVolumeSpecName: "config-volume") pod "9fcd42d0-d922-4a79-91e3-3d86c91def6f" (UID: "9fcd42d0-d922-4a79-91e3-3d86c91def6f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.447990 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fcd42d0-d922-4a79-91e3-3d86c91def6f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9fcd42d0-d922-4a79-91e3-3d86c91def6f" (UID: "9fcd42d0-d922-4a79-91e3-3d86c91def6f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.449841 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fcd42d0-d922-4a79-91e3-3d86c91def6f-kube-api-access-g674t" (OuterVolumeSpecName: "kube-api-access-g674t") pod "9fcd42d0-d922-4a79-91e3-3d86c91def6f" (UID: "9fcd42d0-d922-4a79-91e3-3d86c91def6f"). InnerVolumeSpecName "kube-api-access-g674t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.545177 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g674t\" (UniqueName: \"kubernetes.io/projected/9fcd42d0-d922-4a79-91e3-3d86c91def6f-kube-api-access-g674t\") on node \"crc\" DevicePath \"\"" Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.545231 4888 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9fcd42d0-d922-4a79-91e3-3d86c91def6f-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.545251 4888 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fcd42d0-d922-4a79-91e3-3d86c91def6f-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.947249 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" event={"ID":"9fcd42d0-d922-4a79-91e3-3d86c91def6f","Type":"ContainerDied","Data":"fd9e607cb87e0514d2764ee8cfa8ab02b9f62257c698697a49bac5ab58e9cc17"} Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.947289 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd9e607cb87e0514d2764ee8cfa8ab02b9f62257c698697a49bac5ab58e9cc17" Oct 06 15:45:03 crc kubenswrapper[4888]: I1006 15:45:03.947344 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv" Oct 06 15:45:04 crc kubenswrapper[4888]: I1006 15:45:04.347725 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887"] Oct 06 15:45:04 crc kubenswrapper[4888]: I1006 15:45:04.356167 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329380-hq887"] Oct 06 15:45:04 crc kubenswrapper[4888]: I1006 15:45:04.935432 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ca1572-99ce-4516-96ae-1a9772e4cb35" path="/var/lib/kubelet/pods/e9ca1572-99ce-4516-96ae-1a9772e4cb35/volumes" Oct 06 15:45:13 crc kubenswrapper[4888]: I1006 15:45:13.158917 4888 scope.go:117] "RemoveContainer" containerID="7b27f205f38afd17a23e23c504b8cac7559446dd5eb35f7d02d860147cb9ea46" Oct 06 15:45:32 crc kubenswrapper[4888]: I1006 15:45:32.563324 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:45:32 crc kubenswrapper[4888]: I1006 15:45:32.563876 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:46:02 crc kubenswrapper[4888]: I1006 15:46:02.563977 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Oct 06 15:46:02 crc kubenswrapper[4888]: I1006 15:46:02.565855 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:46:02 crc kubenswrapper[4888]: I1006 15:46:02.565998 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:46:02 crc kubenswrapper[4888]: I1006 15:46:02.566925 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b457172481932fb01bcb9b352e889ca6a84c3f8759b7a9a28309ef990aea6469"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 15:46:02 crc kubenswrapper[4888]: I1006 15:46:02.567078 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://b457172481932fb01bcb9b352e889ca6a84c3f8759b7a9a28309ef990aea6469" gracePeriod=600 Oct 06 15:46:03 crc kubenswrapper[4888]: I1006 15:46:03.423630 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="b457172481932fb01bcb9b352e889ca6a84c3f8759b7a9a28309ef990aea6469" exitCode=0 Oct 06 15:46:03 crc kubenswrapper[4888]: I1006 15:46:03.423727 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"b457172481932fb01bcb9b352e889ca6a84c3f8759b7a9a28309ef990aea6469"} Oct 06 15:46:03 crc kubenswrapper[4888]: I1006 15:46:03.424536 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce"} Oct 06 15:46:03 crc kubenswrapper[4888]: I1006 15:46:03.424574 4888 scope.go:117] "RemoveContainer" containerID="a70d041b22fa28b9088b812fb393bc66dfc97534063a19fb84ef63364a8f760e" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.365217 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kzpw9"] Oct 06 15:46:33 crc kubenswrapper[4888]: E1006 15:46:33.367643 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fcd42d0-d922-4a79-91e3-3d86c91def6f" containerName="collect-profiles" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.367823 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fcd42d0-d922-4a79-91e3-3d86c91def6f" containerName="collect-profiles" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.368292 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fcd42d0-d922-4a79-91e3-3d86c91def6f" containerName="collect-profiles" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.370597 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.376216 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzpw9"] Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.458838 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-utilities\") pod \"certified-operators-kzpw9\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.458923 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzqfp\" (UniqueName: \"kubernetes.io/projected/1b31ea94-a2d8-484f-9b95-c474e6573924-kube-api-access-qzqfp\") pod \"certified-operators-kzpw9\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.458966 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-catalog-content\") pod \"certified-operators-kzpw9\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.560220 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-utilities\") pod \"certified-operators-kzpw9\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.560305 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzqfp\" (UniqueName: \"kubernetes.io/projected/1b31ea94-a2d8-484f-9b95-c474e6573924-kube-api-access-qzqfp\") pod \"certified-operators-kzpw9\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.560337 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-catalog-content\") pod \"certified-operators-kzpw9\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.560778 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-utilities\") pod \"certified-operators-kzpw9\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.560841 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-catalog-content\") pod \"certified-operators-kzpw9\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.579375 4888 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qzqfp\" (UniqueName: \"kubernetes.io/projected/1b31ea94-a2d8-484f-9b95-c474e6573924-kube-api-access-qzqfp\") pod \"certified-operators-kzpw9\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:33 crc kubenswrapper[4888]: I1006 15:46:33.691441 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:34 crc kubenswrapper[4888]: I1006 15:46:34.270616 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kzpw9"] Oct 06 15:46:34 crc kubenswrapper[4888]: W1006 15:46:34.276959 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b31ea94_a2d8_484f_9b95_c474e6573924.slice/crio-e81f064b292510f85ae32121a3504a100b49c7fb6497129578abcf6ab1d1b0b9 WatchSource:0}: Error finding container e81f064b292510f85ae32121a3504a100b49c7fb6497129578abcf6ab1d1b0b9: Status 404 returned error can't find the container with id e81f064b292510f85ae32121a3504a100b49c7fb6497129578abcf6ab1d1b0b9 Oct 06 15:46:34 crc kubenswrapper[4888]: I1006 15:46:34.680649 4888 generic.go:334] "Generic (PLEG): container finished" podID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerID="449ef6176d42976999e3a4f1b62dafb0d341f15c0e443756bce8a8e077d94e34" exitCode=0 Oct 06 15:46:34 crc kubenswrapper[4888]: I1006 15:46:34.680688 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzpw9" event={"ID":"1b31ea94-a2d8-484f-9b95-c474e6573924","Type":"ContainerDied","Data":"449ef6176d42976999e3a4f1b62dafb0d341f15c0e443756bce8a8e077d94e34"} Oct 06 15:46:34 crc kubenswrapper[4888]: I1006 15:46:34.680709 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzpw9" event={"ID":"1b31ea94-a2d8-484f-9b95-c474e6573924","Type":"ContainerStarted","Data":"e81f064b292510f85ae32121a3504a100b49c7fb6497129578abcf6ab1d1b0b9"} Oct 06 15:46:34 crc kubenswrapper[4888]: I1006 15:46:34.683540 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 15:46:35 crc kubenswrapper[4888]: I1006 15:46:35.691177 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzpw9" event={"ID":"1b31ea94-a2d8-484f-9b95-c474e6573924","Type":"ContainerStarted","Data":"30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc"} Oct 06 15:46:37 crc kubenswrapper[4888]: I1006 15:46:37.705981 4888 generic.go:334] "Generic (PLEG): container finished" podID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerID="30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc" exitCode=0 Oct 06 15:46:37 crc kubenswrapper[4888]: I1006 15:46:37.706066 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzpw9" event={"ID":"1b31ea94-a2d8-484f-9b95-c474e6573924","Type":"ContainerDied","Data":"30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc"} Oct 06 15:46:38 crc kubenswrapper[4888]: I1006 15:46:38.716827 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzpw9" event={"ID":"1b31ea94-a2d8-484f-9b95-c474e6573924","Type":"ContainerStarted","Data":"33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5"} Oct 06 15:46:38 crc kubenswrapper[4888]: I1006 
15:46:38.742264 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kzpw9" podStartSLOduration=2.314876084 podStartE2EDuration="5.742248831s" podCreationTimestamp="2025-10-06 15:46:33 +0000 UTC" firstStartedPulling="2025-10-06 15:46:34.683350683 +0000 UTC m=+2734.495701401" lastFinishedPulling="2025-10-06 15:46:38.11072343 +0000 UTC m=+2737.923074148" observedRunningTime="2025-10-06 15:46:38.741606241 +0000 UTC m=+2738.553956969" watchObservedRunningTime="2025-10-06 15:46:38.742248831 +0000 UTC m=+2738.554599549" Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.536491 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p9f28"] Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.540101 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.558769 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9f28"] Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.669770 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-utilities\") pod \"community-operators-p9f28\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.669854 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-catalog-content\") pod \"community-operators-p9f28\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.669965 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fnws\" (UniqueName: \"kubernetes.io/projected/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-kube-api-access-4fnws\") pod \"community-operators-p9f28\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.771830 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-catalog-content\") pod \"community-operators-p9f28\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.772047 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fnws\" (UniqueName: \"kubernetes.io/projected/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-kube-api-access-4fnws\") pod \"community-operators-p9f28\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.772429 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-catalog-content\") pod \"community-operators-p9f28\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " 
pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.772483 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-utilities\") pod \"community-operators-p9f28\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.772937 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-utilities\") pod \"community-operators-p9f28\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.793552 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fnws\" (UniqueName: \"kubernetes.io/projected/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-kube-api-access-4fnws\") pod \"community-operators-p9f28\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:39 crc kubenswrapper[4888]: I1006 15:46:39.862019 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:40 crc kubenswrapper[4888]: I1006 15:46:40.463624 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9f28"] Oct 06 15:46:40 crc kubenswrapper[4888]: W1006 15:46:40.470070 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ff734f4_6d6a_47d8_9121_23e3a6ff3813.slice/crio-05e387961a7c5585d4e24f79d76b32e80a8e497f4ebfb793808c62365252111c WatchSource:0}: Error finding container 05e387961a7c5585d4e24f79d76b32e80a8e497f4ebfb793808c62365252111c: Status 404 returned error can't find the container with id 05e387961a7c5585d4e24f79d76b32e80a8e497f4ebfb793808c62365252111c Oct 06 15:46:40 crc kubenswrapper[4888]: I1006 15:46:40.733745 4888 generic.go:334] "Generic (PLEG): container finished" podID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerID="16455cca5bebaf730ec73238ecd7d05f9735b861ec24a71983745793d27b663f" exitCode=0 Oct 06 15:46:40 crc kubenswrapper[4888]: I1006 15:46:40.733985 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9f28" event={"ID":"8ff734f4-6d6a-47d8-9121-23e3a6ff3813","Type":"ContainerDied","Data":"16455cca5bebaf730ec73238ecd7d05f9735b861ec24a71983745793d27b663f"} Oct 06 15:46:40 crc kubenswrapper[4888]: I1006 15:46:40.734027 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9f28" event={"ID":"8ff734f4-6d6a-47d8-9121-23e3a6ff3813","Type":"ContainerStarted","Data":"05e387961a7c5585d4e24f79d76b32e80a8e497f4ebfb793808c62365252111c"} Oct 06 15:46:42 crc kubenswrapper[4888]: I1006 15:46:42.752695 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9f28" event={"ID":"8ff734f4-6d6a-47d8-9121-23e3a6ff3813","Type":"ContainerStarted","Data":"0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4"} Oct 06 15:46:43 crc kubenswrapper[4888]: I1006 15:46:43.692002 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:43 crc 
kubenswrapper[4888]: I1006 15:46:43.692282 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:43 crc kubenswrapper[4888]: I1006 15:46:43.761758 4888 generic.go:334] "Generic (PLEG): container finished" podID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerID="0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4" exitCode=0 Oct 06 15:46:43 crc kubenswrapper[4888]: I1006 15:46:43.761838 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9f28" event={"ID":"8ff734f4-6d6a-47d8-9121-23e3a6ff3813","Type":"ContainerDied","Data":"0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4"} Oct 06 15:46:44 crc kubenswrapper[4888]: I1006 15:46:44.739599 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kzpw9" podUID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerName="registry-server" probeResult="failure" output=< Oct 06 15:46:44 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s Oct 06 15:46:44 crc kubenswrapper[4888]: > Oct 06 15:46:44 crc kubenswrapper[4888]: I1006 15:46:44.773518 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9f28" event={"ID":"8ff734f4-6d6a-47d8-9121-23e3a6ff3813","Type":"ContainerStarted","Data":"413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0"} Oct 06 15:46:44 crc kubenswrapper[4888]: I1006 15:46:44.801940 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p9f28" podStartSLOduration=2.115108862 podStartE2EDuration="5.80191722s" podCreationTimestamp="2025-10-06 15:46:39 +0000 UTC" firstStartedPulling="2025-10-06 15:46:40.736714307 +0000 UTC m=+2740.549065025" lastFinishedPulling="2025-10-06 15:46:44.423522665 +0000 UTC m=+2744.235873383" observedRunningTime="2025-10-06 15:46:44.791483377 +0000 UTC m=+2744.603834115" watchObservedRunningTime="2025-10-06 15:46:44.80191722 +0000 UTC m=+2744.614267948" Oct 06 15:46:49 crc kubenswrapper[4888]: I1006 15:46:49.862650 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:49 crc kubenswrapper[4888]: I1006 15:46:49.864148 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:50 crc kubenswrapper[4888]: I1006 15:46:50.922006 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p9f28" podUID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerName="registry-server" probeResult="failure" output=< Oct 06 15:46:50 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s Oct 06 15:46:50 crc kubenswrapper[4888]: > Oct 06 15:46:53 crc kubenswrapper[4888]: I1006 15:46:53.740438 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:53 crc kubenswrapper[4888]: I1006 15:46:53.795047 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:54 crc kubenswrapper[4888]: I1006 15:46:54.728596 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzpw9"] Oct 06 15:46:54 crc kubenswrapper[4888]: I1006 15:46:54.853821 4888 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kzpw9" podUID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerName="registry-server" containerID="cri-o://33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5" gracePeriod=2 Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.240519 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.387910 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzqfp\" (UniqueName: \"kubernetes.io/projected/1b31ea94-a2d8-484f-9b95-c474e6573924-kube-api-access-qzqfp\") pod \"1b31ea94-a2d8-484f-9b95-c474e6573924\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.388068 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-catalog-content\") pod \"1b31ea94-a2d8-484f-9b95-c474e6573924\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.388226 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-utilities\") pod \"1b31ea94-a2d8-484f-9b95-c474e6573924\" (UID: \"1b31ea94-a2d8-484f-9b95-c474e6573924\") " Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.389042 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-utilities" (OuterVolumeSpecName: "utilities") pod "1b31ea94-a2d8-484f-9b95-c474e6573924" (UID: "1b31ea94-a2d8-484f-9b95-c474e6573924"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.395116 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b31ea94-a2d8-484f-9b95-c474e6573924-kube-api-access-qzqfp" (OuterVolumeSpecName: "kube-api-access-qzqfp") pod "1b31ea94-a2d8-484f-9b95-c474e6573924" (UID: "1b31ea94-a2d8-484f-9b95-c474e6573924"). InnerVolumeSpecName "kube-api-access-qzqfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.440318 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b31ea94-a2d8-484f-9b95-c474e6573924" (UID: "1b31ea94-a2d8-484f-9b95-c474e6573924"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.490948 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.491197 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b31ea94-a2d8-484f-9b95-c474e6573924-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.491297 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzqfp\" (UniqueName: \"kubernetes.io/projected/1b31ea94-a2d8-484f-9b95-c474e6573924-kube-api-access-qzqfp\") on node \"crc\" DevicePath \"\"" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.865204 4888 generic.go:334] "Generic (PLEG): container finished" podID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerID="33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5" exitCode=0 Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.865279 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kzpw9" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.865288 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzpw9" event={"ID":"1b31ea94-a2d8-484f-9b95-c474e6573924","Type":"ContainerDied","Data":"33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5"} Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.865412 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kzpw9" event={"ID":"1b31ea94-a2d8-484f-9b95-c474e6573924","Type":"ContainerDied","Data":"e81f064b292510f85ae32121a3504a100b49c7fb6497129578abcf6ab1d1b0b9"} Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.865434 4888 scope.go:117] "RemoveContainer" containerID="33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.888478 4888 scope.go:117] "RemoveContainer" containerID="30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.906927 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kzpw9"] Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.917043 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kzpw9"] Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.932485 4888 scope.go:117] "RemoveContainer" containerID="449ef6176d42976999e3a4f1b62dafb0d341f15c0e443756bce8a8e077d94e34" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.965049 4888 scope.go:117] "RemoveContainer" containerID="33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5" Oct 06 15:46:55 crc kubenswrapper[4888]: E1006 15:46:55.969938 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5\": container with ID starting with 33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5 not found: ID does not exist" containerID="33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.969987 
4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5"} err="failed to get container status \"33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5\": rpc error: code = NotFound desc = could not find container \"33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5\": container with ID starting with 33e64409972a584cc324562b783f9842fc6b81871915f016c99f01288cfddbb5 not found: ID does not exist" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.970018 4888 scope.go:117] "RemoveContainer" containerID="30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc" Oct 06 15:46:55 crc kubenswrapper[4888]: E1006 15:46:55.970513 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc\": container with ID starting with 30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc not found: ID does not exist" containerID="30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.970783 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc"} err="failed to get container status \"30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc\": rpc error: code = NotFound desc = could not find container \"30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc\": container with ID starting with 30c750b997c25ea68370c6de0d72ef0d66b8ceb96d86417e8ec83b9345ef1fdc not found: ID does not exist" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.970935 4888 scope.go:117] "RemoveContainer" containerID="449ef6176d42976999e3a4f1b62dafb0d341f15c0e443756bce8a8e077d94e34" Oct 06 15:46:55 crc kubenswrapper[4888]: E1006 15:46:55.971440 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"449ef6176d42976999e3a4f1b62dafb0d341f15c0e443756bce8a8e077d94e34\": container with ID starting with 449ef6176d42976999e3a4f1b62dafb0d341f15c0e443756bce8a8e077d94e34 not found: ID does not exist" containerID="449ef6176d42976999e3a4f1b62dafb0d341f15c0e443756bce8a8e077d94e34" Oct 06 15:46:55 crc kubenswrapper[4888]: I1006 15:46:55.971514 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"449ef6176d42976999e3a4f1b62dafb0d341f15c0e443756bce8a8e077d94e34"} err="failed to get container status \"449ef6176d42976999e3a4f1b62dafb0d341f15c0e443756bce8a8e077d94e34\": rpc error: code = NotFound desc = could not find container \"449ef6176d42976999e3a4f1b62dafb0d341f15c0e443756bce8a8e077d94e34\": container with ID starting with 449ef6176d42976999e3a4f1b62dafb0d341f15c0e443756bce8a8e077d94e34 not found: ID does not exist" Oct 06 15:46:56 crc kubenswrapper[4888]: I1006 15:46:56.934981 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b31ea94-a2d8-484f-9b95-c474e6573924" path="/var/lib/kubelet/pods/1b31ea94-a2d8-484f-9b95-c474e6573924/volumes" Oct 06 15:46:59 crc kubenswrapper[4888]: I1006 15:46:59.913108 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:46:59 crc kubenswrapper[4888]: I1006 15:46:59.965967 4888 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:47:00 crc kubenswrapper[4888]: I1006 15:47:00.146646 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9f28"] Oct 06 15:47:01 crc kubenswrapper[4888]: I1006 15:47:01.920502 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p9f28" podUID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerName="registry-server" containerID="cri-o://413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0" gracePeriod=2 Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.346651 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.432769 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-catalog-content\") pod \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.432857 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-utilities\") pod \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.432938 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fnws\" (UniqueName: \"kubernetes.io/projected/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-kube-api-access-4fnws\") pod \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\" (UID: \"8ff734f4-6d6a-47d8-9121-23e3a6ff3813\") " Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.434946 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-utilities" (OuterVolumeSpecName: "utilities") pod "8ff734f4-6d6a-47d8-9121-23e3a6ff3813" (UID: "8ff734f4-6d6a-47d8-9121-23e3a6ff3813"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.440063 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-kube-api-access-4fnws" (OuterVolumeSpecName: "kube-api-access-4fnws") pod "8ff734f4-6d6a-47d8-9121-23e3a6ff3813" (UID: "8ff734f4-6d6a-47d8-9121-23e3a6ff3813"). InnerVolumeSpecName "kube-api-access-4fnws". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.479771 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ff734f4-6d6a-47d8-9121-23e3a6ff3813" (UID: "8ff734f4-6d6a-47d8-9121-23e3a6ff3813"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.534820 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.535130 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.535146 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fnws\" (UniqueName: \"kubernetes.io/projected/8ff734f4-6d6a-47d8-9121-23e3a6ff3813-kube-api-access-4fnws\") on node \"crc\" DevicePath \"\"" Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.949012 4888 generic.go:334] "Generic (PLEG): container finished" podID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerID="413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0" exitCode=0 Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.949191 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9f28" Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.957413 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9f28" event={"ID":"8ff734f4-6d6a-47d8-9121-23e3a6ff3813","Type":"ContainerDied","Data":"413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0"} Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.957454 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9f28" event={"ID":"8ff734f4-6d6a-47d8-9121-23e3a6ff3813","Type":"ContainerDied","Data":"05e387961a7c5585d4e24f79d76b32e80a8e497f4ebfb793808c62365252111c"} Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.957471 4888 scope.go:117] "RemoveContainer" containerID="413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0" Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.994312 4888 scope.go:117] "RemoveContainer" containerID="0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4" Oct 06 15:47:02 crc kubenswrapper[4888]: I1006 15:47:02.998876 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9f28"] Oct 06 15:47:03 crc kubenswrapper[4888]: I1006 15:47:03.017841 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p9f28"] Oct 06 15:47:03 crc kubenswrapper[4888]: I1006 15:47:03.024210 4888 scope.go:117] "RemoveContainer" containerID="16455cca5bebaf730ec73238ecd7d05f9735b861ec24a71983745793d27b663f" Oct 06 15:47:03 crc kubenswrapper[4888]: I1006 15:47:03.078577 4888 scope.go:117] "RemoveContainer" containerID="413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0" Oct 06 15:47:03 crc kubenswrapper[4888]: E1006 15:47:03.079036 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0\": container with ID starting with 413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0 not found: ID does not exist" containerID="413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0" Oct 06 15:47:03 crc kubenswrapper[4888]: I1006 15:47:03.079069 
4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0"} err="failed to get container status \"413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0\": rpc error: code = NotFound desc = could not find container \"413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0\": container with ID starting with 413a88d2efaad4be93723cb7485a7fe9806136ff618567bd6c2324138acd15a0 not found: ID does not exist" Oct 06 15:47:03 crc kubenswrapper[4888]: I1006 15:47:03.079097 4888 scope.go:117] "RemoveContainer" containerID="0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4" Oct 06 15:47:03 crc kubenswrapper[4888]: E1006 15:47:03.079400 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4\": container with ID starting with 0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4 not found: ID does not exist" containerID="0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4" Oct 06 15:47:03 crc kubenswrapper[4888]: I1006 15:47:03.079430 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4"} err="failed to get container status \"0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4\": rpc error: code = NotFound desc = could not find container \"0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4\": container with ID starting with 0bdd5256e898cd1b2a13482e7540b259d7bff806ec363d72215ec9d403abaef4 not found: ID does not exist" Oct 06 15:47:03 crc kubenswrapper[4888]: I1006 15:47:03.079450 4888 scope.go:117] "RemoveContainer" containerID="16455cca5bebaf730ec73238ecd7d05f9735b861ec24a71983745793d27b663f" Oct 06 15:47:03 crc kubenswrapper[4888]: E1006 15:47:03.079984 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16455cca5bebaf730ec73238ecd7d05f9735b861ec24a71983745793d27b663f\": container with ID starting with 16455cca5bebaf730ec73238ecd7d05f9735b861ec24a71983745793d27b663f not found: ID does not exist" containerID="16455cca5bebaf730ec73238ecd7d05f9735b861ec24a71983745793d27b663f" Oct 06 15:47:03 crc kubenswrapper[4888]: I1006 15:47:03.080012 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16455cca5bebaf730ec73238ecd7d05f9735b861ec24a71983745793d27b663f"} err="failed to get container status \"16455cca5bebaf730ec73238ecd7d05f9735b861ec24a71983745793d27b663f\": rpc error: code = NotFound desc = could not find container \"16455cca5bebaf730ec73238ecd7d05f9735b861ec24a71983745793d27b663f\": container with ID starting with 16455cca5bebaf730ec73238ecd7d05f9735b861ec24a71983745793d27b663f not found: ID does not exist" Oct 06 15:47:04 crc kubenswrapper[4888]: I1006 15:47:04.932922 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" path="/var/lib/kubelet/pods/8ff734f4-6d6a-47d8-9121-23e3a6ff3813/volumes" Oct 06 15:48:02 crc kubenswrapper[4888]: I1006 15:48:02.563499 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:48:02 crc kubenswrapper[4888]: I1006 15:48:02.564015 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:48:32 crc kubenswrapper[4888]: I1006 15:48:32.563399 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:48:32 crc kubenswrapper[4888]: I1006 15:48:32.564011 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:49:02 crc kubenswrapper[4888]: I1006 15:49:02.563597 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:49:02 crc kubenswrapper[4888]: I1006 15:49:02.564879 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:49:02 crc kubenswrapper[4888]: I1006 15:49:02.564957 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:49:02 crc kubenswrapper[4888]: I1006 15:49:02.565994 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 15:49:02 crc kubenswrapper[4888]: I1006 15:49:02.566086 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" gracePeriod=600 Oct 06 15:49:02 crc kubenswrapper[4888]: E1006 15:49:02.688442 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:49:02 crc kubenswrapper[4888]: I1006 15:49:02.951661 4888 
generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" exitCode=0 Oct 06 15:49:02 crc kubenswrapper[4888]: I1006 15:49:02.951702 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce"} Oct 06 15:49:02 crc kubenswrapper[4888]: I1006 15:49:02.951732 4888 scope.go:117] "RemoveContainer" containerID="b457172481932fb01bcb9b352e889ca6a84c3f8759b7a9a28309ef990aea6469" Oct 06 15:49:02 crc kubenswrapper[4888]: I1006 15:49:02.952332 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:49:02 crc kubenswrapper[4888]: E1006 15:49:02.952551 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:49:14 crc kubenswrapper[4888]: I1006 15:49:14.921872 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:49:14 crc kubenswrapper[4888]: E1006 15:49:14.922775 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:49:27 crc kubenswrapper[4888]: I1006 15:49:27.923247 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:49:27 crc kubenswrapper[4888]: E1006 15:49:27.924054 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:49:41 crc kubenswrapper[4888]: I1006 15:49:41.921920 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:49:41 crc kubenswrapper[4888]: E1006 15:49:41.926768 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:49:56 crc kubenswrapper[4888]: I1006 15:49:56.921303 4888 scope.go:117] "RemoveContainer" 
containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:49:56 crc kubenswrapper[4888]: E1006 15:49:56.922234 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:50:10 crc kubenswrapper[4888]: I1006 15:50:10.931889 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:50:10 crc kubenswrapper[4888]: E1006 15:50:10.932683 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:50:22 crc kubenswrapper[4888]: I1006 15:50:22.922994 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:50:22 crc kubenswrapper[4888]: E1006 15:50:22.925984 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:50:34 crc kubenswrapper[4888]: I1006 15:50:34.921330 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:50:34 crc kubenswrapper[4888]: E1006 15:50:34.922430 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:50:45 crc kubenswrapper[4888]: I1006 15:50:45.922791 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:50:45 crc kubenswrapper[4888]: E1006 15:50:45.924080 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.419421 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kb5c6"] Oct 06 15:50:54 crc kubenswrapper[4888]: E1006 15:50:54.421659 4888 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerName="extract-utilities" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.421739 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerName="extract-utilities" Oct 06 15:50:54 crc kubenswrapper[4888]: E1006 15:50:54.421833 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerName="registry-server" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.421933 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerName="registry-server" Oct 06 15:50:54 crc kubenswrapper[4888]: E1006 15:50:54.422002 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerName="extract-utilities" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.422057 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerName="extract-utilities" Oct 06 15:50:54 crc kubenswrapper[4888]: E1006 15:50:54.422132 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerName="registry-server" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.422183 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerName="registry-server" Oct 06 15:50:54 crc kubenswrapper[4888]: E1006 15:50:54.422239 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerName="extract-content" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.422292 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerName="extract-content" Oct 06 15:50:54 crc kubenswrapper[4888]: E1006 15:50:54.422349 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerName="extract-content" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.422438 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerName="extract-content" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.422682 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b31ea94-a2d8-484f-9b95-c474e6573924" containerName="registry-server" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.422746 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff734f4-6d6a-47d8-9121-23e3a6ff3813" containerName="registry-server" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.424130 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.433397 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb5c6"] Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.514529 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv64m\" (UniqueName: \"kubernetes.io/projected/efa86518-0b19-4e08-b2c1-753e48f2ec07-kube-api-access-rv64m\") pod \"redhat-marketplace-kb5c6\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.514580 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-utilities\") pod \"redhat-marketplace-kb5c6\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.514658 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-catalog-content\") pod \"redhat-marketplace-kb5c6\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.615960 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv64m\" (UniqueName: \"kubernetes.io/projected/efa86518-0b19-4e08-b2c1-753e48f2ec07-kube-api-access-rv64m\") pod \"redhat-marketplace-kb5c6\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.616018 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-utilities\") pod \"redhat-marketplace-kb5c6\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.616114 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-catalog-content\") pod \"redhat-marketplace-kb5c6\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.616590 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-catalog-content\") pod \"redhat-marketplace-kb5c6\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.617291 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-utilities\") pod \"redhat-marketplace-kb5c6\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.647212 4888 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rv64m\" (UniqueName: \"kubernetes.io/projected/efa86518-0b19-4e08-b2c1-753e48f2ec07-kube-api-access-rv64m\") pod \"redhat-marketplace-kb5c6\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:50:54 crc kubenswrapper[4888]: I1006 15:50:54.783432 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:50:55 crc kubenswrapper[4888]: I1006 15:50:55.227850 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb5c6"] Oct 06 15:50:55 crc kubenswrapper[4888]: I1006 15:50:55.905478 4888 generic.go:334] "Generic (PLEG): container finished" podID="efa86518-0b19-4e08-b2c1-753e48f2ec07" containerID="86debdc1634df4650e294eb6495d50af7527183facc52a61164e0acdb908c456" exitCode=0 Oct 06 15:50:55 crc kubenswrapper[4888]: I1006 15:50:55.905707 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb5c6" event={"ID":"efa86518-0b19-4e08-b2c1-753e48f2ec07","Type":"ContainerDied","Data":"86debdc1634df4650e294eb6495d50af7527183facc52a61164e0acdb908c456"} Oct 06 15:50:55 crc kubenswrapper[4888]: I1006 15:50:55.906995 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb5c6" event={"ID":"efa86518-0b19-4e08-b2c1-753e48f2ec07","Type":"ContainerStarted","Data":"3edc1ea35bdb87bf4634a55ce9ef0044217441768dd18bb67e25a2a73828027f"} Oct 06 15:50:57 crc kubenswrapper[4888]: I1006 15:50:57.924711 4888 generic.go:334] "Generic (PLEG): container finished" podID="efa86518-0b19-4e08-b2c1-753e48f2ec07" containerID="5f0a2572e4d8420937fde90b76d745fc22ae1cd9eae2dce20c644a3d66ea76ba" exitCode=0 Oct 06 15:50:57 crc kubenswrapper[4888]: I1006 15:50:57.924919 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb5c6" event={"ID":"efa86518-0b19-4e08-b2c1-753e48f2ec07","Type":"ContainerDied","Data":"5f0a2572e4d8420937fde90b76d745fc22ae1cd9eae2dce20c644a3d66ea76ba"} Oct 06 15:50:58 crc kubenswrapper[4888]: I1006 15:50:58.935618 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb5c6" event={"ID":"efa86518-0b19-4e08-b2c1-753e48f2ec07","Type":"ContainerStarted","Data":"e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab"} Oct 06 15:50:58 crc kubenswrapper[4888]: I1006 15:50:58.956604 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kb5c6" podStartSLOduration=2.370653105 podStartE2EDuration="4.956586661s" podCreationTimestamp="2025-10-06 15:50:54 +0000 UTC" firstStartedPulling="2025-10-06 15:50:55.907595229 +0000 UTC m=+2995.719945947" lastFinishedPulling="2025-10-06 15:50:58.493528785 +0000 UTC m=+2998.305879503" observedRunningTime="2025-10-06 15:50:58.950607575 +0000 UTC m=+2998.762958313" watchObservedRunningTime="2025-10-06 15:50:58.956586661 +0000 UTC m=+2998.768937369" Oct 06 15:50:59 crc kubenswrapper[4888]: I1006 15:50:59.921429 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:50:59 crc kubenswrapper[4888]: E1006 15:50:59.921645 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:51:04 crc kubenswrapper[4888]: I1006 15:51:04.784762 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:51:04 crc kubenswrapper[4888]: I1006 15:51:04.786746 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:51:04 crc kubenswrapper[4888]: I1006 15:51:04.837729 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:51:05 crc kubenswrapper[4888]: I1006 15:51:05.062339 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:51:05 crc kubenswrapper[4888]: I1006 15:51:05.118130 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb5c6"] Oct 06 15:51:07 crc kubenswrapper[4888]: I1006 15:51:07.000309 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kb5c6" podUID="efa86518-0b19-4e08-b2c1-753e48f2ec07" containerName="registry-server" containerID="cri-o://e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab" gracePeriod=2 Oct 06 15:51:07 crc kubenswrapper[4888]: I1006 15:51:07.504248 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:51:07 crc kubenswrapper[4888]: I1006 15:51:07.562831 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-utilities\") pod \"efa86518-0b19-4e08-b2c1-753e48f2ec07\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " Oct 06 15:51:07 crc kubenswrapper[4888]: I1006 15:51:07.563049 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-catalog-content\") pod \"efa86518-0b19-4e08-b2c1-753e48f2ec07\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " Oct 06 15:51:07 crc kubenswrapper[4888]: I1006 15:51:07.563135 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv64m\" (UniqueName: \"kubernetes.io/projected/efa86518-0b19-4e08-b2c1-753e48f2ec07-kube-api-access-rv64m\") pod \"efa86518-0b19-4e08-b2c1-753e48f2ec07\" (UID: \"efa86518-0b19-4e08-b2c1-753e48f2ec07\") " Oct 06 15:51:07 crc kubenswrapper[4888]: I1006 15:51:07.563880 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-utilities" (OuterVolumeSpecName: "utilities") pod "efa86518-0b19-4e08-b2c1-753e48f2ec07" (UID: "efa86518-0b19-4e08-b2c1-753e48f2ec07"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:51:07 crc kubenswrapper[4888]: I1006 15:51:07.571944 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa86518-0b19-4e08-b2c1-753e48f2ec07-kube-api-access-rv64m" (OuterVolumeSpecName: "kube-api-access-rv64m") pod "efa86518-0b19-4e08-b2c1-753e48f2ec07" (UID: "efa86518-0b19-4e08-b2c1-753e48f2ec07"). InnerVolumeSpecName "kube-api-access-rv64m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:51:07 crc kubenswrapper[4888]: I1006 15:51:07.578672 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efa86518-0b19-4e08-b2c1-753e48f2ec07" (UID: "efa86518-0b19-4e08-b2c1-753e48f2ec07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:51:07 crc kubenswrapper[4888]: I1006 15:51:07.664222 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:51:07 crc kubenswrapper[4888]: I1006 15:51:07.664465 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv64m\" (UniqueName: \"kubernetes.io/projected/efa86518-0b19-4e08-b2c1-753e48f2ec07-kube-api-access-rv64m\") on node \"crc\" DevicePath \"\"" Oct 06 15:51:07 crc kubenswrapper[4888]: I1006 15:51:07.664531 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efa86518-0b19-4e08-b2c1-753e48f2ec07-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.015184 4888 generic.go:334] "Generic (PLEG): container finished" podID="efa86518-0b19-4e08-b2c1-753e48f2ec07" containerID="e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab" exitCode=0 Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.015237 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb5c6" event={"ID":"efa86518-0b19-4e08-b2c1-753e48f2ec07","Type":"ContainerDied","Data":"e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab"} Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.015272 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kb5c6" event={"ID":"efa86518-0b19-4e08-b2c1-753e48f2ec07","Type":"ContainerDied","Data":"3edc1ea35bdb87bf4634a55ce9ef0044217441768dd18bb67e25a2a73828027f"} Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.015293 4888 scope.go:117] "RemoveContainer" containerID="e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab" Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.015504 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kb5c6" Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.046379 4888 scope.go:117] "RemoveContainer" containerID="5f0a2572e4d8420937fde90b76d745fc22ae1cd9eae2dce20c644a3d66ea76ba" Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.052170 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb5c6"] Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.074822 4888 scope.go:117] "RemoveContainer" containerID="86debdc1634df4650e294eb6495d50af7527183facc52a61164e0acdb908c456" Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.086526 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kb5c6"] Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.109967 4888 scope.go:117] "RemoveContainer" containerID="e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab" Oct 06 15:51:08 crc kubenswrapper[4888]: E1006 15:51:08.110455 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab\": container with ID starting with e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab not found: ID does not exist" containerID="e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab" Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.110503 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab"} err="failed to get container status \"e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab\": rpc error: code = NotFound desc = could not find container \"e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab\": container with ID starting with e0041dec6e9c4f01741025a03afc6ec35055cfd8b0d11c3671c25858129e7eab not found: ID does not exist" Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.110536 4888 scope.go:117] "RemoveContainer" containerID="5f0a2572e4d8420937fde90b76d745fc22ae1cd9eae2dce20c644a3d66ea76ba" Oct 06 15:51:08 crc kubenswrapper[4888]: E1006 15:51:08.110840 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f0a2572e4d8420937fde90b76d745fc22ae1cd9eae2dce20c644a3d66ea76ba\": container with ID starting with 5f0a2572e4d8420937fde90b76d745fc22ae1cd9eae2dce20c644a3d66ea76ba not found: ID does not exist" containerID="5f0a2572e4d8420937fde90b76d745fc22ae1cd9eae2dce20c644a3d66ea76ba" Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.110878 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f0a2572e4d8420937fde90b76d745fc22ae1cd9eae2dce20c644a3d66ea76ba"} err="failed to get container status \"5f0a2572e4d8420937fde90b76d745fc22ae1cd9eae2dce20c644a3d66ea76ba\": rpc error: code = NotFound desc = could not find container \"5f0a2572e4d8420937fde90b76d745fc22ae1cd9eae2dce20c644a3d66ea76ba\": container with ID starting with 5f0a2572e4d8420937fde90b76d745fc22ae1cd9eae2dce20c644a3d66ea76ba not found: ID does not exist" Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.110914 4888 scope.go:117] "RemoveContainer" containerID="86debdc1634df4650e294eb6495d50af7527183facc52a61164e0acdb908c456" Oct 06 15:51:08 crc kubenswrapper[4888]: E1006 15:51:08.111181 4888 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"86debdc1634df4650e294eb6495d50af7527183facc52a61164e0acdb908c456\": container with ID starting with 86debdc1634df4650e294eb6495d50af7527183facc52a61164e0acdb908c456 not found: ID does not exist" containerID="86debdc1634df4650e294eb6495d50af7527183facc52a61164e0acdb908c456" Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.111213 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86debdc1634df4650e294eb6495d50af7527183facc52a61164e0acdb908c456"} err="failed to get container status \"86debdc1634df4650e294eb6495d50af7527183facc52a61164e0acdb908c456\": rpc error: code = NotFound desc = could not find container \"86debdc1634df4650e294eb6495d50af7527183facc52a61164e0acdb908c456\": container with ID starting with 86debdc1634df4650e294eb6495d50af7527183facc52a61164e0acdb908c456 not found: ID does not exist" Oct 06 15:51:08 crc kubenswrapper[4888]: I1006 15:51:08.932188 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa86518-0b19-4e08-b2c1-753e48f2ec07" path="/var/lib/kubelet/pods/efa86518-0b19-4e08-b2c1-753e48f2ec07/volumes" Oct 06 15:51:10 crc kubenswrapper[4888]: I1006 15:51:10.928692 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:51:10 crc kubenswrapper[4888]: E1006 15:51:10.929579 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:51:22 crc kubenswrapper[4888]: I1006 15:51:22.923812 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:51:22 crc kubenswrapper[4888]: E1006 15:51:22.924647 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:51:34 crc kubenswrapper[4888]: I1006 15:51:34.922515 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:51:34 crc kubenswrapper[4888]: E1006 15:51:34.923320 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:51:47 crc kubenswrapper[4888]: I1006 15:51:47.922348 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:51:47 crc kubenswrapper[4888]: E1006 15:51:47.923291 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.095495 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mcdwz"] Oct 06 15:51:50 crc kubenswrapper[4888]: E1006 15:51:50.096253 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa86518-0b19-4e08-b2c1-753e48f2ec07" containerName="extract-utilities" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.096270 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa86518-0b19-4e08-b2c1-753e48f2ec07" containerName="extract-utilities" Oct 06 15:51:50 crc kubenswrapper[4888]: E1006 15:51:50.096299 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa86518-0b19-4e08-b2c1-753e48f2ec07" containerName="registry-server" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.096307 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa86518-0b19-4e08-b2c1-753e48f2ec07" containerName="registry-server" Oct 06 15:51:50 crc kubenswrapper[4888]: E1006 15:51:50.096352 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa86518-0b19-4e08-b2c1-753e48f2ec07" containerName="extract-content" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.096360 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa86518-0b19-4e08-b2c1-753e48f2ec07" containerName="extract-content" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.096582 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa86518-0b19-4e08-b2c1-753e48f2ec07" containerName="registry-server" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.098279 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.112263 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mcdwz"] Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.195470 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78lxq\" (UniqueName: \"kubernetes.io/projected/03e04cb0-27da-473b-91bb-ed9c2e64524d-kube-api-access-78lxq\") pod \"redhat-operators-mcdwz\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.195672 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-catalog-content\") pod \"redhat-operators-mcdwz\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.195813 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-utilities\") pod \"redhat-operators-mcdwz\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.298055 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-catalog-content\") pod \"redhat-operators-mcdwz\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.298133 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-utilities\") pod \"redhat-operators-mcdwz\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.298275 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78lxq\" (UniqueName: \"kubernetes.io/projected/03e04cb0-27da-473b-91bb-ed9c2e64524d-kube-api-access-78lxq\") pod \"redhat-operators-mcdwz\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.298974 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-catalog-content\") pod \"redhat-operators-mcdwz\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.299059 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-utilities\") pod \"redhat-operators-mcdwz\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.319204 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-78lxq\" (UniqueName: \"kubernetes.io/projected/03e04cb0-27da-473b-91bb-ed9c2e64524d-kube-api-access-78lxq\") pod \"redhat-operators-mcdwz\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:51:50 crc kubenswrapper[4888]: I1006 15:51:50.429250 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:51:51 crc kubenswrapper[4888]: I1006 15:51:51.036864 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mcdwz"] Oct 06 15:51:51 crc kubenswrapper[4888]: I1006 15:51:51.388826 4888 generic.go:334] "Generic (PLEG): container finished" podID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerID="b5bb7b7f46e213c14a6362c3267937f846620cfb62c0e070a35e8d1f6c2c7e00" exitCode=0 Oct 06 15:51:51 crc kubenswrapper[4888]: I1006 15:51:51.388914 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcdwz" event={"ID":"03e04cb0-27da-473b-91bb-ed9c2e64524d","Type":"ContainerDied","Data":"b5bb7b7f46e213c14a6362c3267937f846620cfb62c0e070a35e8d1f6c2c7e00"} Oct 06 15:51:51 crc kubenswrapper[4888]: I1006 15:51:51.389154 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcdwz" event={"ID":"03e04cb0-27da-473b-91bb-ed9c2e64524d","Type":"ContainerStarted","Data":"e9925397d7e556f16258529414bb6802cef9c90981f208ef03eeaf46cab32edf"} Oct 06 15:51:51 crc kubenswrapper[4888]: I1006 15:51:51.445934 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 15:51:53 crc kubenswrapper[4888]: I1006 15:51:53.404299 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcdwz" event={"ID":"03e04cb0-27da-473b-91bb-ed9c2e64524d","Type":"ContainerStarted","Data":"3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd"} Oct 06 15:51:56 crc kubenswrapper[4888]: I1006 15:51:56.439435 4888 generic.go:334] "Generic (PLEG): container finished" podID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerID="3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd" exitCode=0 Oct 06 15:51:56 crc kubenswrapper[4888]: I1006 15:51:56.439575 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcdwz" event={"ID":"03e04cb0-27da-473b-91bb-ed9c2e64524d","Type":"ContainerDied","Data":"3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd"} Oct 06 15:51:57 crc kubenswrapper[4888]: I1006 15:51:57.450965 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcdwz" event={"ID":"03e04cb0-27da-473b-91bb-ed9c2e64524d","Type":"ContainerStarted","Data":"7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680"} Oct 06 15:51:57 crc kubenswrapper[4888]: I1006 15:51:57.472579 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mcdwz" podStartSLOduration=1.939991713 podStartE2EDuration="7.472554394s" podCreationTimestamp="2025-10-06 15:51:50 +0000 UTC" firstStartedPulling="2025-10-06 15:51:51.44559698 +0000 UTC m=+3051.257947698" lastFinishedPulling="2025-10-06 15:51:56.978159661 +0000 UTC m=+3056.790510379" observedRunningTime="2025-10-06 15:51:57.470986295 +0000 UTC m=+3057.283337013" watchObservedRunningTime="2025-10-06 15:51:57.472554394 +0000 UTC m=+3057.284905142" Oct 06 15:52:00 crc 
kubenswrapper[4888]: I1006 15:52:00.430089 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:52:00 crc kubenswrapper[4888]: I1006 15:52:00.430408 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:52:00 crc kubenswrapper[4888]: I1006 15:52:00.932693 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:52:00 crc kubenswrapper[4888]: E1006 15:52:00.933280 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:52:01 crc kubenswrapper[4888]: I1006 15:52:01.500595 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mcdwz" podUID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerName="registry-server" probeResult="failure" output=< Oct 06 15:52:01 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s Oct 06 15:52:01 crc kubenswrapper[4888]: > Oct 06 15:52:10 crc kubenswrapper[4888]: I1006 15:52:10.476101 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:52:10 crc kubenswrapper[4888]: I1006 15:52:10.541903 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:52:10 crc kubenswrapper[4888]: I1006 15:52:10.716787 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mcdwz"] Oct 06 15:52:11 crc kubenswrapper[4888]: I1006 15:52:11.567158 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mcdwz" podUID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerName="registry-server" containerID="cri-o://7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680" gracePeriod=2 Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.049973 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.138674 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-utilities\") pod \"03e04cb0-27da-473b-91bb-ed9c2e64524d\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.138881 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78lxq\" (UniqueName: \"kubernetes.io/projected/03e04cb0-27da-473b-91bb-ed9c2e64524d-kube-api-access-78lxq\") pod \"03e04cb0-27da-473b-91bb-ed9c2e64524d\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.138933 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-catalog-content\") pod \"03e04cb0-27da-473b-91bb-ed9c2e64524d\" (UID: \"03e04cb0-27da-473b-91bb-ed9c2e64524d\") " Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.140826 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-utilities" (OuterVolumeSpecName: "utilities") pod "03e04cb0-27da-473b-91bb-ed9c2e64524d" (UID: "03e04cb0-27da-473b-91bb-ed9c2e64524d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.161853 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e04cb0-27da-473b-91bb-ed9c2e64524d-kube-api-access-78lxq" (OuterVolumeSpecName: "kube-api-access-78lxq") pod "03e04cb0-27da-473b-91bb-ed9c2e64524d" (UID: "03e04cb0-27da-473b-91bb-ed9c2e64524d"). InnerVolumeSpecName "kube-api-access-78lxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.230004 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03e04cb0-27da-473b-91bb-ed9c2e64524d" (UID: "03e04cb0-27da-473b-91bb-ed9c2e64524d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.242155 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78lxq\" (UniqueName: \"kubernetes.io/projected/03e04cb0-27da-473b-91bb-ed9c2e64524d-kube-api-access-78lxq\") on node \"crc\" DevicePath \"\"" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.242239 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.242272 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e04cb0-27da-473b-91bb-ed9c2e64524d-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.578063 4888 generic.go:334] "Generic (PLEG): container finished" podID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerID="7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680" exitCode=0 Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.578105 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcdwz" event={"ID":"03e04cb0-27da-473b-91bb-ed9c2e64524d","Type":"ContainerDied","Data":"7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680"} Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.578128 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mcdwz" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.578150 4888 scope.go:117] "RemoveContainer" containerID="7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.578137 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcdwz" event={"ID":"03e04cb0-27da-473b-91bb-ed9c2e64524d","Type":"ContainerDied","Data":"e9925397d7e556f16258529414bb6802cef9c90981f208ef03eeaf46cab32edf"} Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.598497 4888 scope.go:117] "RemoveContainer" containerID="3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.633584 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mcdwz"] Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.637720 4888 scope.go:117] "RemoveContainer" containerID="b5bb7b7f46e213c14a6362c3267937f846620cfb62c0e070a35e8d1f6c2c7e00" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.646080 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mcdwz"] Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.669983 4888 scope.go:117] "RemoveContainer" containerID="7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680" Oct 06 15:52:12 crc kubenswrapper[4888]: E1006 15:52:12.670450 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680\": container with ID starting with 7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680 not found: ID does not exist" containerID="7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.670484 4888 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680"} err="failed to get container status \"7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680\": rpc error: code = NotFound desc = could not find container \"7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680\": container with ID starting with 7217d3de9ee2ff969b1ca1ef6d699e95dee0df0aabb3cec82665f203dedc7680 not found: ID does not exist" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.670507 4888 scope.go:117] "RemoveContainer" containerID="3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd" Oct 06 15:52:12 crc kubenswrapper[4888]: E1006 15:52:12.670764 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd\": container with ID starting with 3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd not found: ID does not exist" containerID="3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.670785 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd"} err="failed to get container status \"3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd\": rpc error: code = NotFound desc = could not find container \"3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd\": container with ID starting with 3eeb6711cec829d93773c61970d6822dd3f5e0764bcfc99a4f8e94b5ccc447fd not found: ID does not exist" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.670864 4888 scope.go:117] "RemoveContainer" containerID="b5bb7b7f46e213c14a6362c3267937f846620cfb62c0e070a35e8d1f6c2c7e00" Oct 06 15:52:12 crc kubenswrapper[4888]: E1006 15:52:12.671186 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5bb7b7f46e213c14a6362c3267937f846620cfb62c0e070a35e8d1f6c2c7e00\": container with ID starting with b5bb7b7f46e213c14a6362c3267937f846620cfb62c0e070a35e8d1f6c2c7e00 not found: ID does not exist" containerID="b5bb7b7f46e213c14a6362c3267937f846620cfb62c0e070a35e8d1f6c2c7e00" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.671211 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5bb7b7f46e213c14a6362c3267937f846620cfb62c0e070a35e8d1f6c2c7e00"} err="failed to get container status \"b5bb7b7f46e213c14a6362c3267937f846620cfb62c0e070a35e8d1f6c2c7e00\": rpc error: code = NotFound desc = could not find container \"b5bb7b7f46e213c14a6362c3267937f846620cfb62c0e070a35e8d1f6c2c7e00\": container with ID starting with b5bb7b7f46e213c14a6362c3267937f846620cfb62c0e070a35e8d1f6c2c7e00 not found: ID does not exist" Oct 06 15:52:12 crc kubenswrapper[4888]: I1006 15:52:12.937725 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e04cb0-27da-473b-91bb-ed9c2e64524d" path="/var/lib/kubelet/pods/03e04cb0-27da-473b-91bb-ed9c2e64524d/volumes" Oct 06 15:52:13 crc kubenswrapper[4888]: I1006 15:52:13.921737 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:52:13 crc kubenswrapper[4888]: E1006 15:52:13.922125 4888 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:52:28 crc kubenswrapper[4888]: I1006 15:52:28.921224 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:52:28 crc kubenswrapper[4888]: E1006 15:52:28.921929 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:52:40 crc kubenswrapper[4888]: I1006 15:52:40.926247 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:52:40 crc kubenswrapper[4888]: E1006 15:52:40.927088 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:52:55 crc kubenswrapper[4888]: I1006 15:52:55.921614 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:52:55 crc kubenswrapper[4888]: E1006 15:52:55.922378 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:53:09 crc kubenswrapper[4888]: I1006 15:53:09.921698 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:53:09 crc kubenswrapper[4888]: E1006 15:53:09.922559 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:53:23 crc kubenswrapper[4888]: I1006 15:53:23.922338 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:53:23 crc kubenswrapper[4888]: E1006 15:53:23.923527 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:53:36 crc kubenswrapper[4888]: I1006 15:53:36.926396 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:53:36 crc kubenswrapper[4888]: E1006 15:53:36.927328 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:53:47 crc kubenswrapper[4888]: I1006 15:53:47.921953 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:53:47 crc kubenswrapper[4888]: E1006 15:53:47.922844 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:54:01 crc kubenswrapper[4888]: I1006 15:54:01.922062 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:54:01 crc kubenswrapper[4888]: E1006 15:54:01.922754 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 15:54:15 crc kubenswrapper[4888]: I1006 15:54:15.921287 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:54:16 crc kubenswrapper[4888]: I1006 15:54:16.651996 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"c496c684f1457104a7366ebe69cbaf0b18059bb0775845d157706bb46b452d83"} Oct 06 15:56:32 crc kubenswrapper[4888]: I1006 15:56:32.563341 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:56:32 crc kubenswrapper[4888]: I1006 15:56:32.564143 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 
15:56:57.241076 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7dhch"] Oct 06 15:56:57 crc kubenswrapper[4888]: E1006 15:56:57.242037 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerName="extract-utilities" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.242052 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerName="extract-utilities" Oct 06 15:56:57 crc kubenswrapper[4888]: E1006 15:56:57.242063 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerName="registry-server" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.242071 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerName="registry-server" Oct 06 15:56:57 crc kubenswrapper[4888]: E1006 15:56:57.242111 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerName="extract-content" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.242117 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerName="extract-content" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.242289 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e04cb0-27da-473b-91bb-ed9c2e64524d" containerName="registry-server" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.243746 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.260325 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dhch"] Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.368520 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-utilities\") pod \"community-operators-7dhch\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.368820 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-catalog-content\") pod \"community-operators-7dhch\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.368972 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gxn8\" (UniqueName: \"kubernetes.io/projected/facba4df-743e-4be2-8744-0bd67214dbc2-kube-api-access-9gxn8\") pod \"community-operators-7dhch\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.471201 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-catalog-content\") pod \"community-operators-7dhch\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:56:57 crc 
kubenswrapper[4888]: I1006 15:56:57.471297 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gxn8\" (UniqueName: \"kubernetes.io/projected/facba4df-743e-4be2-8744-0bd67214dbc2-kube-api-access-9gxn8\") pod \"community-operators-7dhch\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.471413 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-utilities\") pod \"community-operators-7dhch\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.472098 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-utilities\") pod \"community-operators-7dhch\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.472382 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-catalog-content\") pod \"community-operators-7dhch\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.493770 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gxn8\" (UniqueName: \"kubernetes.io/projected/facba4df-743e-4be2-8744-0bd67214dbc2-kube-api-access-9gxn8\") pod \"community-operators-7dhch\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:56:57 crc kubenswrapper[4888]: I1006 15:56:57.561500 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:56:58 crc kubenswrapper[4888]: I1006 15:56:58.107869 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7dhch"] Oct 06 15:56:59 crc kubenswrapper[4888]: I1006 15:56:59.028172 4888 generic.go:334] "Generic (PLEG): container finished" podID="facba4df-743e-4be2-8744-0bd67214dbc2" containerID="5b9c7c0eec77c50e72c3d0420fb923a4dee7dfac48f03204f76ae8bb3ba9f95f" exitCode=0 Oct 06 15:56:59 crc kubenswrapper[4888]: I1006 15:56:59.028503 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dhch" event={"ID":"facba4df-743e-4be2-8744-0bd67214dbc2","Type":"ContainerDied","Data":"5b9c7c0eec77c50e72c3d0420fb923a4dee7dfac48f03204f76ae8bb3ba9f95f"} Oct 06 15:56:59 crc kubenswrapper[4888]: I1006 15:56:59.028556 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dhch" event={"ID":"facba4df-743e-4be2-8744-0bd67214dbc2","Type":"ContainerStarted","Data":"340c8eba50292f65f78b2d37bda9c59186ac8e853ec7777478a577e9ee5d1e0a"} Oct 06 15:56:59 crc kubenswrapper[4888]: I1006 15:56:59.031312 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 15:57:01 crc kubenswrapper[4888]: I1006 15:57:01.045423 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dhch" event={"ID":"facba4df-743e-4be2-8744-0bd67214dbc2","Type":"ContainerStarted","Data":"243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1"} Oct 06 15:57:02 crc kubenswrapper[4888]: I1006 15:57:02.055128 4888 generic.go:334] "Generic (PLEG): container finished" podID="facba4df-743e-4be2-8744-0bd67214dbc2" containerID="243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1" exitCode=0 Oct 06 15:57:02 crc kubenswrapper[4888]: I1006 15:57:02.055186 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dhch" event={"ID":"facba4df-743e-4be2-8744-0bd67214dbc2","Type":"ContainerDied","Data":"243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1"} Oct 06 15:57:02 crc kubenswrapper[4888]: I1006 15:57:02.563244 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:57:02 crc kubenswrapper[4888]: I1006 15:57:02.563294 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:57:03 crc kubenswrapper[4888]: I1006 15:57:03.065102 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dhch" event={"ID":"facba4df-743e-4be2-8744-0bd67214dbc2","Type":"ContainerStarted","Data":"4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6"} Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.737716 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7dhch" podStartSLOduration=6.259986131 podStartE2EDuration="9.737696787s" 
podCreationTimestamp="2025-10-06 15:56:57 +0000 UTC" firstStartedPulling="2025-10-06 15:56:59.031005857 +0000 UTC m=+3358.843356575" lastFinishedPulling="2025-10-06 15:57:02.508716503 +0000 UTC m=+3362.321067231" observedRunningTime="2025-10-06 15:57:03.089040735 +0000 UTC m=+3362.901391453" watchObservedRunningTime="2025-10-06 15:57:06.737696787 +0000 UTC m=+3366.550047505" Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.746193 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t9z7k"] Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.748077 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.762587 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9z7k"] Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.872241 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-catalog-content\") pod \"certified-operators-t9z7k\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.872392 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-utilities\") pod \"certified-operators-t9z7k\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.872420 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2kbv\" (UniqueName: \"kubernetes.io/projected/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-kube-api-access-m2kbv\") pod \"certified-operators-t9z7k\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.973837 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-utilities\") pod \"certified-operators-t9z7k\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.973880 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2kbv\" (UniqueName: \"kubernetes.io/projected/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-kube-api-access-m2kbv\") pod \"certified-operators-t9z7k\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.973946 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-catalog-content\") pod \"certified-operators-t9z7k\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.974275 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-utilities\") pod \"certified-operators-t9z7k\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.975253 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-catalog-content\") pod \"certified-operators-t9z7k\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:06 crc kubenswrapper[4888]: I1006 15:57:06.997727 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2kbv\" (UniqueName: \"kubernetes.io/projected/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-kube-api-access-m2kbv\") pod \"certified-operators-t9z7k\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:07 crc kubenswrapper[4888]: I1006 15:57:07.140444 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:07 crc kubenswrapper[4888]: I1006 15:57:07.562141 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:57:07 crc kubenswrapper[4888]: I1006 15:57:07.562422 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:57:07 crc kubenswrapper[4888]: I1006 15:57:07.622523 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:57:07 crc kubenswrapper[4888]: I1006 15:57:07.699010 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9z7k"] Oct 06 15:57:08 crc kubenswrapper[4888]: I1006 15:57:08.109681 4888 generic.go:334] "Generic (PLEG): container finished" podID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" containerID="4e3bb1f9d23bb6902730faafa45aa0a2af55bb720b4b6492ab7f415cd34b1c0b" exitCode=0 Oct 06 15:57:08 crc kubenswrapper[4888]: I1006 15:57:08.111761 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9z7k" event={"ID":"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8","Type":"ContainerDied","Data":"4e3bb1f9d23bb6902730faafa45aa0a2af55bb720b4b6492ab7f415cd34b1c0b"} Oct 06 15:57:08 crc kubenswrapper[4888]: I1006 15:57:08.111814 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9z7k" event={"ID":"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8","Type":"ContainerStarted","Data":"e861ce0b9b084575436f72fa672d12cbd7d33077218e9c20d573edb3d20f3659"} Oct 06 15:57:08 crc kubenswrapper[4888]: I1006 15:57:08.162055 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:57:09 crc kubenswrapper[4888]: I1006 15:57:09.122532 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9z7k" event={"ID":"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8","Type":"ContainerStarted","Data":"7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14"} Oct 06 15:57:09 crc kubenswrapper[4888]: I1006 15:57:09.920695 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dhch"] Oct 06 15:57:10 crc 
kubenswrapper[4888]: I1006 15:57:10.132273 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7dhch" podUID="facba4df-743e-4be2-8744-0bd67214dbc2" containerName="registry-server" containerID="cri-o://4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6" gracePeriod=2 Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.070420 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.141338 4888 generic.go:334] "Generic (PLEG): container finished" podID="facba4df-743e-4be2-8744-0bd67214dbc2" containerID="4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6" exitCode=0 Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.141388 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7dhch" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.141429 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dhch" event={"ID":"facba4df-743e-4be2-8744-0bd67214dbc2","Type":"ContainerDied","Data":"4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6"} Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.142776 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7dhch" event={"ID":"facba4df-743e-4be2-8744-0bd67214dbc2","Type":"ContainerDied","Data":"340c8eba50292f65f78b2d37bda9c59186ac8e853ec7777478a577e9ee5d1e0a"} Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.142826 4888 scope.go:117] "RemoveContainer" containerID="4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.145267 4888 generic.go:334] "Generic (PLEG): container finished" podID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" containerID="7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14" exitCode=0 Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.145312 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9z7k" event={"ID":"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8","Type":"ContainerDied","Data":"7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14"} Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.162181 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-utilities\") pod \"facba4df-743e-4be2-8744-0bd67214dbc2\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.162296 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-catalog-content\") pod \"facba4df-743e-4be2-8744-0bd67214dbc2\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.162333 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gxn8\" (UniqueName: \"kubernetes.io/projected/facba4df-743e-4be2-8744-0bd67214dbc2-kube-api-access-9gxn8\") pod \"facba4df-743e-4be2-8744-0bd67214dbc2\" (UID: \"facba4df-743e-4be2-8744-0bd67214dbc2\") " Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.162975 4888 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-utilities" (OuterVolumeSpecName: "utilities") pod "facba4df-743e-4be2-8744-0bd67214dbc2" (UID: "facba4df-743e-4be2-8744-0bd67214dbc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.166837 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.169979 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/facba4df-743e-4be2-8744-0bd67214dbc2-kube-api-access-9gxn8" (OuterVolumeSpecName: "kube-api-access-9gxn8") pod "facba4df-743e-4be2-8744-0bd67214dbc2" (UID: "facba4df-743e-4be2-8744-0bd67214dbc2"). InnerVolumeSpecName "kube-api-access-9gxn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.170268 4888 scope.go:117] "RemoveContainer" containerID="243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.213732 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "facba4df-743e-4be2-8744-0bd67214dbc2" (UID: "facba4df-743e-4be2-8744-0bd67214dbc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.223469 4888 scope.go:117] "RemoveContainer" containerID="5b9c7c0eec77c50e72c3d0420fb923a4dee7dfac48f03204f76ae8bb3ba9f95f" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.266684 4888 scope.go:117] "RemoveContainer" containerID="4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6" Oct 06 15:57:11 crc kubenswrapper[4888]: E1006 15:57:11.268007 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6\": container with ID starting with 4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6 not found: ID does not exist" containerID="4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.268060 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6"} err="failed to get container status \"4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6\": rpc error: code = NotFound desc = could not find container \"4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6\": container with ID starting with 4577581fb31b03a2f13e90c6d2ae5b9cb0f5431648f2f0328baefe1eedc232f6 not found: ID does not exist" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.268135 4888 scope.go:117] "RemoveContainer" containerID="243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1" Oct 06 15:57:11 crc kubenswrapper[4888]: E1006 15:57:11.268419 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1\": container with ID starting with 243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1 not found: ID does not exist" containerID="243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.268457 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1"} err="failed to get container status \"243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1\": rpc error: code = NotFound desc = could not find container \"243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1\": container with ID starting with 243b5034a93689860d0b8af7e51e0afdb0f0af2ad92ef38bb5cfe37a5fd80fc1 not found: ID does not exist" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.268477 4888 scope.go:117] "RemoveContainer" containerID="5b9c7c0eec77c50e72c3d0420fb923a4dee7dfac48f03204f76ae8bb3ba9f95f" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.268932 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/facba4df-743e-4be2-8744-0bd67214dbc2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.268978 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gxn8\" (UniqueName: \"kubernetes.io/projected/facba4df-743e-4be2-8744-0bd67214dbc2-kube-api-access-9gxn8\") on node \"crc\" DevicePath \"\"" Oct 06 15:57:11 crc kubenswrapper[4888]: E1006 15:57:11.269081 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b9c7c0eec77c50e72c3d0420fb923a4dee7dfac48f03204f76ae8bb3ba9f95f\": container with ID starting with 5b9c7c0eec77c50e72c3d0420fb923a4dee7dfac48f03204f76ae8bb3ba9f95f not found: ID does not exist" containerID="5b9c7c0eec77c50e72c3d0420fb923a4dee7dfac48f03204f76ae8bb3ba9f95f" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.269127 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b9c7c0eec77c50e72c3d0420fb923a4dee7dfac48f03204f76ae8bb3ba9f95f"} err="failed to get container status \"5b9c7c0eec77c50e72c3d0420fb923a4dee7dfac48f03204f76ae8bb3ba9f95f\": rpc error: code = NotFound desc = could not find container \"5b9c7c0eec77c50e72c3d0420fb923a4dee7dfac48f03204f76ae8bb3ba9f95f\": container with ID starting with 5b9c7c0eec77c50e72c3d0420fb923a4dee7dfac48f03204f76ae8bb3ba9f95f not found: ID does not exist" Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.476627 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7dhch"] Oct 06 15:57:11 crc kubenswrapper[4888]: I1006 15:57:11.483083 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7dhch"] Oct 06 15:57:12 crc kubenswrapper[4888]: I1006 15:57:12.159580 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9z7k" event={"ID":"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8","Type":"ContainerStarted","Data":"fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5"} Oct 06 15:57:12 crc kubenswrapper[4888]: I1006 15:57:12.180691 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t9z7k" podStartSLOduration=2.572403134 
podStartE2EDuration="6.180667043s" podCreationTimestamp="2025-10-06 15:57:06 +0000 UTC" firstStartedPulling="2025-10-06 15:57:08.117176823 +0000 UTC m=+3367.929527541" lastFinishedPulling="2025-10-06 15:57:11.725440722 +0000 UTC m=+3371.537791450" observedRunningTime="2025-10-06 15:57:12.176891014 +0000 UTC m=+3371.989241732" watchObservedRunningTime="2025-10-06 15:57:12.180667043 +0000 UTC m=+3371.993017781" Oct 06 15:57:12 crc kubenswrapper[4888]: I1006 15:57:12.934263 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="facba4df-743e-4be2-8744-0bd67214dbc2" path="/var/lib/kubelet/pods/facba4df-743e-4be2-8744-0bd67214dbc2/volumes" Oct 06 15:57:17 crc kubenswrapper[4888]: I1006 15:57:17.140914 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:17 crc kubenswrapper[4888]: I1006 15:57:17.141212 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:17 crc kubenswrapper[4888]: I1006 15:57:17.193928 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:17 crc kubenswrapper[4888]: I1006 15:57:17.263047 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:17 crc kubenswrapper[4888]: I1006 15:57:17.431435 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9z7k"] Oct 06 15:57:19 crc kubenswrapper[4888]: I1006 15:57:19.218446 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t9z7k" podUID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" containerName="registry-server" containerID="cri-o://fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5" gracePeriod=2 Oct 06 15:57:19 crc kubenswrapper[4888]: I1006 15:57:19.664971 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:19 crc kubenswrapper[4888]: I1006 15:57:19.726621 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2kbv\" (UniqueName: \"kubernetes.io/projected/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-kube-api-access-m2kbv\") pod \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " Oct 06 15:57:19 crc kubenswrapper[4888]: I1006 15:57:19.726662 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-catalog-content\") pod \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " Oct 06 15:57:19 crc kubenswrapper[4888]: I1006 15:57:19.726686 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-utilities\") pod \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\" (UID: \"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8\") " Oct 06 15:57:19 crc kubenswrapper[4888]: I1006 15:57:19.727725 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-utilities" (OuterVolumeSpecName: "utilities") pod "706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" (UID: "706152cc-8cbe-4ffb-b03a-9d7bd0c264e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:57:19 crc kubenswrapper[4888]: I1006 15:57:19.741050 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-kube-api-access-m2kbv" (OuterVolumeSpecName: "kube-api-access-m2kbv") pod "706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" (UID: "706152cc-8cbe-4ffb-b03a-9d7bd0c264e8"). InnerVolumeSpecName "kube-api-access-m2kbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 15:57:19 crc kubenswrapper[4888]: I1006 15:57:19.780451 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" (UID: "706152cc-8cbe-4ffb-b03a-9d7bd0c264e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 15:57:19 crc kubenswrapper[4888]: I1006 15:57:19.829287 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2kbv\" (UniqueName: \"kubernetes.io/projected/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-kube-api-access-m2kbv\") on node \"crc\" DevicePath \"\"" Oct 06 15:57:19 crc kubenswrapper[4888]: I1006 15:57:19.829316 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 15:57:19 crc kubenswrapper[4888]: I1006 15:57:19.829326 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.228316 4888 generic.go:334] "Generic (PLEG): container finished" podID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" containerID="fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5" exitCode=0 Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.228410 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9z7k" Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.228436 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9z7k" event={"ID":"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8","Type":"ContainerDied","Data":"fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5"} Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.229918 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9z7k" event={"ID":"706152cc-8cbe-4ffb-b03a-9d7bd0c264e8","Type":"ContainerDied","Data":"e861ce0b9b084575436f72fa672d12cbd7d33077218e9c20d573edb3d20f3659"} Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.230013 4888 scope.go:117] "RemoveContainer" containerID="fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5" Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.251778 4888 scope.go:117] "RemoveContainer" containerID="7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14" Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.267889 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9z7k"] Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.276716 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t9z7k"] Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.279840 4888 scope.go:117] "RemoveContainer" containerID="4e3bb1f9d23bb6902730faafa45aa0a2af55bb720b4b6492ab7f415cd34b1c0b" Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.320093 4888 scope.go:117] "RemoveContainer" containerID="fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5" Oct 06 15:57:20 crc kubenswrapper[4888]: E1006 15:57:20.320672 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5\": container with ID starting with fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5 not found: ID does not exist" containerID="fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5" Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.320708 
4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5"} err="failed to get container status \"fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5\": rpc error: code = NotFound desc = could not find container \"fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5\": container with ID starting with fe61aad9bc4e6801f421ed2163683580def6a0cd1f0927a71ac678a01513bca5 not found: ID does not exist" Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.320735 4888 scope.go:117] "RemoveContainer" containerID="7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14" Oct 06 15:57:20 crc kubenswrapper[4888]: E1006 15:57:20.321097 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14\": container with ID starting with 7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14 not found: ID does not exist" containerID="7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14" Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.321124 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14"} err="failed to get container status \"7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14\": rpc error: code = NotFound desc = could not find container \"7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14\": container with ID starting with 7abdc0c0582424f9c281c567e2df599ee3ca6e23f530cf9b4c6c648103521e14 not found: ID does not exist" Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.321142 4888 scope.go:117] "RemoveContainer" containerID="4e3bb1f9d23bb6902730faafa45aa0a2af55bb720b4b6492ab7f415cd34b1c0b" Oct 06 15:57:20 crc kubenswrapper[4888]: E1006 15:57:20.321392 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e3bb1f9d23bb6902730faafa45aa0a2af55bb720b4b6492ab7f415cd34b1c0b\": container with ID starting with 4e3bb1f9d23bb6902730faafa45aa0a2af55bb720b4b6492ab7f415cd34b1c0b not found: ID does not exist" containerID="4e3bb1f9d23bb6902730faafa45aa0a2af55bb720b4b6492ab7f415cd34b1c0b" Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.321497 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e3bb1f9d23bb6902730faafa45aa0a2af55bb720b4b6492ab7f415cd34b1c0b"} err="failed to get container status \"4e3bb1f9d23bb6902730faafa45aa0a2af55bb720b4b6492ab7f415cd34b1c0b\": rpc error: code = NotFound desc = could not find container \"4e3bb1f9d23bb6902730faafa45aa0a2af55bb720b4b6492ab7f415cd34b1c0b\": container with ID starting with 4e3bb1f9d23bb6902730faafa45aa0a2af55bb720b4b6492ab7f415cd34b1c0b not found: ID does not exist" Oct 06 15:57:20 crc kubenswrapper[4888]: I1006 15:57:20.935420 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" path="/var/lib/kubelet/pods/706152cc-8cbe-4ffb-b03a-9d7bd0c264e8/volumes" Oct 06 15:57:32 crc kubenswrapper[4888]: I1006 15:57:32.563301 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:57:32 crc kubenswrapper[4888]: I1006 15:57:32.563764 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 15:57:32 crc kubenswrapper[4888]: I1006 15:57:32.563834 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 15:57:32 crc kubenswrapper[4888]: I1006 15:57:32.564394 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c496c684f1457104a7366ebe69cbaf0b18059bb0775845d157706bb46b452d83"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 15:57:32 crc kubenswrapper[4888]: I1006 15:57:32.564457 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://c496c684f1457104a7366ebe69cbaf0b18059bb0775845d157706bb46b452d83" gracePeriod=600 Oct 06 15:57:33 crc kubenswrapper[4888]: I1006 15:57:33.339776 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="c496c684f1457104a7366ebe69cbaf0b18059bb0775845d157706bb46b452d83" exitCode=0 Oct 06 15:57:33 crc kubenswrapper[4888]: I1006 15:57:33.339846 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"c496c684f1457104a7366ebe69cbaf0b18059bb0775845d157706bb46b452d83"} Oct 06 15:57:33 crc kubenswrapper[4888]: I1006 15:57:33.340382 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee"} Oct 06 15:57:33 crc kubenswrapper[4888]: I1006 15:57:33.340419 4888 scope.go:117] "RemoveContainer" containerID="9a9a759958b825a3e90fa553e69a54f60d9b41775dcc8d9eabbf18d6132126ce" Oct 06 15:59:32 crc kubenswrapper[4888]: I1006 15:59:32.563901 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 15:59:32 crc kubenswrapper[4888]: I1006 15:59:32.564540 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.179673 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb"] Oct 06 16:00:00 crc 
kubenswrapper[4888]: E1006 16:00:00.187632 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="facba4df-743e-4be2-8744-0bd67214dbc2" containerName="extract-utilities" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.187831 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="facba4df-743e-4be2-8744-0bd67214dbc2" containerName="extract-utilities" Oct 06 16:00:00 crc kubenswrapper[4888]: E1006 16:00:00.187940 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" containerName="extract-content" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.187952 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" containerName="extract-content" Oct 06 16:00:00 crc kubenswrapper[4888]: E1006 16:00:00.188014 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="facba4df-743e-4be2-8744-0bd67214dbc2" containerName="extract-content" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.188032 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="facba4df-743e-4be2-8744-0bd67214dbc2" containerName="extract-content" Oct 06 16:00:00 crc kubenswrapper[4888]: E1006 16:00:00.188070 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" containerName="extract-utilities" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.188079 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" containerName="extract-utilities" Oct 06 16:00:00 crc kubenswrapper[4888]: E1006 16:00:00.188100 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="facba4df-743e-4be2-8744-0bd67214dbc2" containerName="registry-server" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.188109 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="facba4df-743e-4be2-8744-0bd67214dbc2" containerName="registry-server" Oct 06 16:00:00 crc kubenswrapper[4888]: E1006 16:00:00.188151 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" containerName="registry-server" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.188160 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" containerName="registry-server" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.189140 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="706152cc-8cbe-4ffb-b03a-9d7bd0c264e8" containerName="registry-server" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.189208 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="facba4df-743e-4be2-8744-0bd67214dbc2" containerName="registry-server" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.190383 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.194037 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.198150 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.211580 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb"] Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.315593 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/360526ca-01d2-4bae-84ce-c625eade5be4-secret-volume\") pod \"collect-profiles-29329440-tswhb\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.315903 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw8hf\" (UniqueName: \"kubernetes.io/projected/360526ca-01d2-4bae-84ce-c625eade5be4-kube-api-access-rw8hf\") pod \"collect-profiles-29329440-tswhb\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.316054 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/360526ca-01d2-4bae-84ce-c625eade5be4-config-volume\") pod \"collect-profiles-29329440-tswhb\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.417549 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/360526ca-01d2-4bae-84ce-c625eade5be4-config-volume\") pod \"collect-profiles-29329440-tswhb\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.417655 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/360526ca-01d2-4bae-84ce-c625eade5be4-secret-volume\") pod \"collect-profiles-29329440-tswhb\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.417745 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw8hf\" (UniqueName: \"kubernetes.io/projected/360526ca-01d2-4bae-84ce-c625eade5be4-kube-api-access-rw8hf\") pod \"collect-profiles-29329440-tswhb\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.418523 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/360526ca-01d2-4bae-84ce-c625eade5be4-config-volume\") pod 
\"collect-profiles-29329440-tswhb\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.426945 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/360526ca-01d2-4bae-84ce-c625eade5be4-secret-volume\") pod \"collect-profiles-29329440-tswhb\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.437529 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw8hf\" (UniqueName: \"kubernetes.io/projected/360526ca-01d2-4bae-84ce-c625eade5be4-kube-api-access-rw8hf\") pod \"collect-profiles-29329440-tswhb\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:00 crc kubenswrapper[4888]: I1006 16:00:00.554392 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:01 crc kubenswrapper[4888]: I1006 16:00:01.188931 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb"] Oct 06 16:00:01 crc kubenswrapper[4888]: I1006 16:00:01.577787 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" event={"ID":"360526ca-01d2-4bae-84ce-c625eade5be4","Type":"ContainerStarted","Data":"63b9575822f2cde600eb6f0735192df4b8e8c0eb5e2ced329171181f5c4f2419"} Oct 06 16:00:01 crc kubenswrapper[4888]: I1006 16:00:01.578163 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" event={"ID":"360526ca-01d2-4bae-84ce-c625eade5be4","Type":"ContainerStarted","Data":"8296cd1142b44b8b34c72b5d3969a33fbcac5c63c790c16c69734bbee6518587"} Oct 06 16:00:01 crc kubenswrapper[4888]: I1006 16:00:01.601019 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" podStartSLOduration=1.60099613 podStartE2EDuration="1.60099613s" podCreationTimestamp="2025-10-06 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 16:00:01.593982539 +0000 UTC m=+3541.406333257" watchObservedRunningTime="2025-10-06 16:00:01.60099613 +0000 UTC m=+3541.413346848" Oct 06 16:00:02 crc kubenswrapper[4888]: I1006 16:00:02.563762 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 16:00:02 crc kubenswrapper[4888]: I1006 16:00:02.564083 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 16:00:02 crc kubenswrapper[4888]: I1006 16:00:02.589479 4888 generic.go:334] "Generic (PLEG): container 
finished" podID="360526ca-01d2-4bae-84ce-c625eade5be4" containerID="63b9575822f2cde600eb6f0735192df4b8e8c0eb5e2ced329171181f5c4f2419" exitCode=0 Oct 06 16:00:02 crc kubenswrapper[4888]: I1006 16:00:02.589522 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" event={"ID":"360526ca-01d2-4bae-84ce-c625eade5be4","Type":"ContainerDied","Data":"63b9575822f2cde600eb6f0735192df4b8e8c0eb5e2ced329171181f5c4f2419"} Oct 06 16:00:03 crc kubenswrapper[4888]: I1006 16:00:03.931745 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.091995 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/360526ca-01d2-4bae-84ce-c625eade5be4-config-volume\") pod \"360526ca-01d2-4bae-84ce-c625eade5be4\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.092206 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw8hf\" (UniqueName: \"kubernetes.io/projected/360526ca-01d2-4bae-84ce-c625eade5be4-kube-api-access-rw8hf\") pod \"360526ca-01d2-4bae-84ce-c625eade5be4\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.092228 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/360526ca-01d2-4bae-84ce-c625eade5be4-secret-volume\") pod \"360526ca-01d2-4bae-84ce-c625eade5be4\" (UID: \"360526ca-01d2-4bae-84ce-c625eade5be4\") " Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.093050 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/360526ca-01d2-4bae-84ce-c625eade5be4-config-volume" (OuterVolumeSpecName: "config-volume") pod "360526ca-01d2-4bae-84ce-c625eade5be4" (UID: "360526ca-01d2-4bae-84ce-c625eade5be4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.098372 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360526ca-01d2-4bae-84ce-c625eade5be4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "360526ca-01d2-4bae-84ce-c625eade5be4" (UID: "360526ca-01d2-4bae-84ce-c625eade5be4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.100054 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360526ca-01d2-4bae-84ce-c625eade5be4-kube-api-access-rw8hf" (OuterVolumeSpecName: "kube-api-access-rw8hf") pod "360526ca-01d2-4bae-84ce-c625eade5be4" (UID: "360526ca-01d2-4bae-84ce-c625eade5be4"). InnerVolumeSpecName "kube-api-access-rw8hf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.194523 4888 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/360526ca-01d2-4bae-84ce-c625eade5be4-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.194560 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw8hf\" (UniqueName: \"kubernetes.io/projected/360526ca-01d2-4bae-84ce-c625eade5be4-kube-api-access-rw8hf\") on node \"crc\" DevicePath \"\"" Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.194576 4888 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/360526ca-01d2-4bae-84ce-c625eade5be4-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.253697 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz"] Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.259459 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329395-hddtz"] Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.607032 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" event={"ID":"360526ca-01d2-4bae-84ce-c625eade5be4","Type":"ContainerDied","Data":"8296cd1142b44b8b34c72b5d3969a33fbcac5c63c790c16c69734bbee6518587"} Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.607082 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329440-tswhb" Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.607088 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8296cd1142b44b8b34c72b5d3969a33fbcac5c63c790c16c69734bbee6518587" Oct 06 16:00:04 crc kubenswrapper[4888]: I1006 16:00:04.932980 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4706e942-55aa-4d9b-a703-ab2566f31e8b" path="/var/lib/kubelet/pods/4706e942-55aa-4d9b-a703-ab2566f31e8b/volumes" Oct 06 16:00:13 crc kubenswrapper[4888]: I1006 16:00:13.545219 4888 scope.go:117] "RemoveContainer" containerID="fe63d22a61f9906b24d9f20ee40c7c6af974d0d8a40948153e06cbb9fec593dd" Oct 06 16:00:32 crc kubenswrapper[4888]: I1006 16:00:32.564066 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 16:00:32 crc kubenswrapper[4888]: I1006 16:00:32.564646 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 16:00:32 crc kubenswrapper[4888]: I1006 16:00:32.564705 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 16:00:32 crc kubenswrapper[4888]: I1006 16:00:32.565773 4888 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 16:00:32 crc kubenswrapper[4888]: I1006 16:00:32.565859 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" gracePeriod=600 Oct 06 16:00:32 crc kubenswrapper[4888]: E1006 16:00:32.695654 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:00:32 crc kubenswrapper[4888]: I1006 16:00:32.834539 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" exitCode=0 Oct 06 16:00:32 crc kubenswrapper[4888]: I1006 16:00:32.834594 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee"} Oct 06 16:00:32 crc kubenswrapper[4888]: I1006 16:00:32.834632 4888 scope.go:117] "RemoveContainer" containerID="c496c684f1457104a7366ebe69cbaf0b18059bb0775845d157706bb46b452d83" Oct 06 16:00:32 crc kubenswrapper[4888]: I1006 16:00:32.835456 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:00:32 crc kubenswrapper[4888]: E1006 16:00:32.835916 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:00:47 crc kubenswrapper[4888]: I1006 16:00:47.921999 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:00:47 crc kubenswrapper[4888]: E1006 16:00:47.922913 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.698995 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-44sx2"] Oct 06 16:00:54 crc kubenswrapper[4888]: E1006 16:00:54.700351 4888 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360526ca-01d2-4bae-84ce-c625eade5be4" containerName="collect-profiles" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.700372 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="360526ca-01d2-4bae-84ce-c625eade5be4" containerName="collect-profiles" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.700589 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="360526ca-01d2-4bae-84ce-c625eade5be4" containerName="collect-profiles" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.702577 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.707199 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44sx2"] Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.858059 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdslf\" (UniqueName: \"kubernetes.io/projected/10ee6c82-a6ee-46af-beb0-dfd57203c509-kube-api-access-cdslf\") pod \"redhat-marketplace-44sx2\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.858149 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-utilities\") pod \"redhat-marketplace-44sx2\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.858274 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-catalog-content\") pod \"redhat-marketplace-44sx2\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.960266 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-utilities\") pod \"redhat-marketplace-44sx2\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.960653 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-catalog-content\") pod \"redhat-marketplace-44sx2\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.960851 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-utilities\") pod \"redhat-marketplace-44sx2\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.960987 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdslf\" (UniqueName: 
\"kubernetes.io/projected/10ee6c82-a6ee-46af-beb0-dfd57203c509-kube-api-access-cdslf\") pod \"redhat-marketplace-44sx2\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.961162 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-catalog-content\") pod \"redhat-marketplace-44sx2\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:00:54 crc kubenswrapper[4888]: I1006 16:00:54.984713 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdslf\" (UniqueName: \"kubernetes.io/projected/10ee6c82-a6ee-46af-beb0-dfd57203c509-kube-api-access-cdslf\") pod \"redhat-marketplace-44sx2\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:00:55 crc kubenswrapper[4888]: I1006 16:00:55.030175 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:00:55 crc kubenswrapper[4888]: I1006 16:00:55.511772 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44sx2"] Oct 06 16:00:56 crc kubenswrapper[4888]: I1006 16:00:56.020160 4888 generic.go:334] "Generic (PLEG): container finished" podID="10ee6c82-a6ee-46af-beb0-dfd57203c509" containerID="d73bc31d96ea6d156e2aa29e9dbb5ca26ac71a418d7b2f03ea4c24d8fad53b96" exitCode=0 Oct 06 16:00:56 crc kubenswrapper[4888]: I1006 16:00:56.020264 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44sx2" event={"ID":"10ee6c82-a6ee-46af-beb0-dfd57203c509","Type":"ContainerDied","Data":"d73bc31d96ea6d156e2aa29e9dbb5ca26ac71a418d7b2f03ea4c24d8fad53b96"} Oct 06 16:00:56 crc kubenswrapper[4888]: I1006 16:00:56.020393 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44sx2" event={"ID":"10ee6c82-a6ee-46af-beb0-dfd57203c509","Type":"ContainerStarted","Data":"7fdad8227b9e6563c045959e793f59aff5f63800dd112d98f5557d10ca8dff57"} Oct 06 16:00:57 crc kubenswrapper[4888]: I1006 16:00:57.034791 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44sx2" event={"ID":"10ee6c82-a6ee-46af-beb0-dfd57203c509","Type":"ContainerStarted","Data":"93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35"} Oct 06 16:00:58 crc kubenswrapper[4888]: I1006 16:00:58.053585 4888 generic.go:334] "Generic (PLEG): container finished" podID="10ee6c82-a6ee-46af-beb0-dfd57203c509" containerID="93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35" exitCode=0 Oct 06 16:00:58 crc kubenswrapper[4888]: I1006 16:00:58.053646 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44sx2" event={"ID":"10ee6c82-a6ee-46af-beb0-dfd57203c509","Type":"ContainerDied","Data":"93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35"} Oct 06 16:00:59 crc kubenswrapper[4888]: I1006 16:00:59.067775 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44sx2" event={"ID":"10ee6c82-a6ee-46af-beb0-dfd57203c509","Type":"ContainerStarted","Data":"80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14"} Oct 06 16:00:59 crc kubenswrapper[4888]: I1006 
16:00:59.087545 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-44sx2" podStartSLOduration=2.57462705 podStartE2EDuration="5.08752781s" podCreationTimestamp="2025-10-06 16:00:54 +0000 UTC" firstStartedPulling="2025-10-06 16:00:56.022328079 +0000 UTC m=+3595.834678797" lastFinishedPulling="2025-10-06 16:00:58.535228839 +0000 UTC m=+3598.347579557" observedRunningTime="2025-10-06 16:00:59.083036767 +0000 UTC m=+3598.895387495" watchObservedRunningTime="2025-10-06 16:00:59.08752781 +0000 UTC m=+3598.899878528" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.146067 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29329441-v7tzm"] Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.147595 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.160619 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29329441-v7tzm"] Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.257600 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-config-data\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.257711 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-fernet-keys\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.257743 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhtn4\" (UniqueName: \"kubernetes.io/projected/cd50e72b-66b9-49db-9a55-7aff1a106a03-kube-api-access-nhtn4\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.257768 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-combined-ca-bundle\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.359192 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhtn4\" (UniqueName: \"kubernetes.io/projected/cd50e72b-66b9-49db-9a55-7aff1a106a03-kube-api-access-nhtn4\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.359781 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-combined-ca-bundle\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc 
kubenswrapper[4888]: I1006 16:01:00.360074 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-config-data\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.360219 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-fernet-keys\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.367626 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-combined-ca-bundle\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.367944 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-config-data\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.371720 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-fernet-keys\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.377902 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhtn4\" (UniqueName: \"kubernetes.io/projected/cd50e72b-66b9-49db-9a55-7aff1a106a03-kube-api-access-nhtn4\") pod \"keystone-cron-29329441-v7tzm\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.467652 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.928446 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:01:00 crc kubenswrapper[4888]: E1006 16:01:00.929057 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:01:00 crc kubenswrapper[4888]: I1006 16:01:00.958674 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29329441-v7tzm"] Oct 06 16:01:01 crc kubenswrapper[4888]: I1006 16:01:01.083248 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329441-v7tzm" event={"ID":"cd50e72b-66b9-49db-9a55-7aff1a106a03","Type":"ContainerStarted","Data":"ff2a8b2d5cbdfc1f9573d61ef440a3cdd0a9d3ad6416474b696600fd41459071"} Oct 06 16:01:02 crc kubenswrapper[4888]: I1006 16:01:02.101774 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329441-v7tzm" event={"ID":"cd50e72b-66b9-49db-9a55-7aff1a106a03","Type":"ContainerStarted","Data":"0992642e0f62df43c8d6449e70a838d0a14d3f111bdc1b1e4c3954c027a62a54"} Oct 06 16:01:02 crc kubenswrapper[4888]: I1006 16:01:02.129992 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29329441-v7tzm" podStartSLOduration=2.129968938 podStartE2EDuration="2.129968938s" podCreationTimestamp="2025-10-06 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 16:01:02.120097364 +0000 UTC m=+3601.932448082" watchObservedRunningTime="2025-10-06 16:01:02.129968938 +0000 UTC m=+3601.942319656" Oct 06 16:01:05 crc kubenswrapper[4888]: I1006 16:01:05.031354 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:01:05 crc kubenswrapper[4888]: I1006 16:01:05.031987 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:01:05 crc kubenswrapper[4888]: I1006 16:01:05.077630 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:01:05 crc kubenswrapper[4888]: I1006 16:01:05.127824 4888 generic.go:334] "Generic (PLEG): container finished" podID="cd50e72b-66b9-49db-9a55-7aff1a106a03" containerID="0992642e0f62df43c8d6449e70a838d0a14d3f111bdc1b1e4c3954c027a62a54" exitCode=0 Oct 06 16:01:05 crc kubenswrapper[4888]: I1006 16:01:05.127917 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329441-v7tzm" event={"ID":"cd50e72b-66b9-49db-9a55-7aff1a106a03","Type":"ContainerDied","Data":"0992642e0f62df43c8d6449e70a838d0a14d3f111bdc1b1e4c3954c027a62a54"} Oct 06 16:01:05 crc kubenswrapper[4888]: I1006 16:01:05.173216 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:01:05 crc kubenswrapper[4888]: I1006 16:01:05.308446 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-44sx2"] Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.453606 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.583384 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-fernet-keys\") pod \"cd50e72b-66b9-49db-9a55-7aff1a106a03\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.583459 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-config-data\") pod \"cd50e72b-66b9-49db-9a55-7aff1a106a03\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.583495 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhtn4\" (UniqueName: \"kubernetes.io/projected/cd50e72b-66b9-49db-9a55-7aff1a106a03-kube-api-access-nhtn4\") pod \"cd50e72b-66b9-49db-9a55-7aff1a106a03\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.583529 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-combined-ca-bundle\") pod \"cd50e72b-66b9-49db-9a55-7aff1a106a03\" (UID: \"cd50e72b-66b9-49db-9a55-7aff1a106a03\") " Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.625101 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd50e72b-66b9-49db-9a55-7aff1a106a03-kube-api-access-nhtn4" (OuterVolumeSpecName: "kube-api-access-nhtn4") pod "cd50e72b-66b9-49db-9a55-7aff1a106a03" (UID: "cd50e72b-66b9-49db-9a55-7aff1a106a03"). InnerVolumeSpecName "kube-api-access-nhtn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.625263 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cd50e72b-66b9-49db-9a55-7aff1a106a03" (UID: "cd50e72b-66b9-49db-9a55-7aff1a106a03"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.632158 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd50e72b-66b9-49db-9a55-7aff1a106a03" (UID: "cd50e72b-66b9-49db-9a55-7aff1a106a03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.663015 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-config-data" (OuterVolumeSpecName: "config-data") pod "cd50e72b-66b9-49db-9a55-7aff1a106a03" (UID: "cd50e72b-66b9-49db-9a55-7aff1a106a03"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.685672 4888 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.685708 4888 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.685718 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhtn4\" (UniqueName: \"kubernetes.io/projected/cd50e72b-66b9-49db-9a55-7aff1a106a03-kube-api-access-nhtn4\") on node \"crc\" DevicePath \"\"" Oct 06 16:01:06 crc kubenswrapper[4888]: I1006 16:01:06.685729 4888 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd50e72b-66b9-49db-9a55-7aff1a106a03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.152599 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-44sx2" podUID="10ee6c82-a6ee-46af-beb0-dfd57203c509" containerName="registry-server" containerID="cri-o://80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14" gracePeriod=2 Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.153140 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29329441-v7tzm" Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.153997 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29329441-v7tzm" event={"ID":"cd50e72b-66b9-49db-9a55-7aff1a106a03","Type":"ContainerDied","Data":"ff2a8b2d5cbdfc1f9573d61ef440a3cdd0a9d3ad6416474b696600fd41459071"} Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.154029 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff2a8b2d5cbdfc1f9573d61ef440a3cdd0a9d3ad6416474b696600fd41459071" Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.652343 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.805482 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-catalog-content\") pod \"10ee6c82-a6ee-46af-beb0-dfd57203c509\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.805843 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-utilities\") pod \"10ee6c82-a6ee-46af-beb0-dfd57203c509\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.806113 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdslf\" (UniqueName: \"kubernetes.io/projected/10ee6c82-a6ee-46af-beb0-dfd57203c509-kube-api-access-cdslf\") pod \"10ee6c82-a6ee-46af-beb0-dfd57203c509\" (UID: \"10ee6c82-a6ee-46af-beb0-dfd57203c509\") " Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.806652 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-utilities" (OuterVolumeSpecName: "utilities") pod "10ee6c82-a6ee-46af-beb0-dfd57203c509" (UID: "10ee6c82-a6ee-46af-beb0-dfd57203c509"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.810637 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ee6c82-a6ee-46af-beb0-dfd57203c509-kube-api-access-cdslf" (OuterVolumeSpecName: "kube-api-access-cdslf") pod "10ee6c82-a6ee-46af-beb0-dfd57203c509" (UID: "10ee6c82-a6ee-46af-beb0-dfd57203c509"). InnerVolumeSpecName "kube-api-access-cdslf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.821063 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10ee6c82-a6ee-46af-beb0-dfd57203c509" (UID: "10ee6c82-a6ee-46af-beb0-dfd57203c509"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.908696 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.908725 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ee6c82-a6ee-46af-beb0-dfd57203c509-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 16:01:07 crc kubenswrapper[4888]: I1006 16:01:07.908737 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdslf\" (UniqueName: \"kubernetes.io/projected/10ee6c82-a6ee-46af-beb0-dfd57203c509-kube-api-access-cdslf\") on node \"crc\" DevicePath \"\"" Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.164525 4888 generic.go:334] "Generic (PLEG): container finished" podID="10ee6c82-a6ee-46af-beb0-dfd57203c509" containerID="80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14" exitCode=0 Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.164566 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44sx2" event={"ID":"10ee6c82-a6ee-46af-beb0-dfd57203c509","Type":"ContainerDied","Data":"80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14"} Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.164591 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44sx2" event={"ID":"10ee6c82-a6ee-46af-beb0-dfd57203c509","Type":"ContainerDied","Data":"7fdad8227b9e6563c045959e793f59aff5f63800dd112d98f5557d10ca8dff57"} Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.164607 4888 scope.go:117] "RemoveContainer" containerID="80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14" Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.164957 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44sx2" Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.196942 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-44sx2"] Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.202350 4888 scope.go:117] "RemoveContainer" containerID="93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35" Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.203634 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-44sx2"] Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.218373 4888 scope.go:117] "RemoveContainer" containerID="d73bc31d96ea6d156e2aa29e9dbb5ca26ac71a418d7b2f03ea4c24d8fad53b96" Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.263653 4888 scope.go:117] "RemoveContainer" containerID="80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14" Oct 06 16:01:08 crc kubenswrapper[4888]: E1006 16:01:08.264192 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14\": container with ID starting with 80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14 not found: ID does not exist" containerID="80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14" Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.264243 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14"} err="failed to get container status \"80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14\": rpc error: code = NotFound desc = could not find container \"80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14\": container with ID starting with 80ccbb716e9980fe925a03a3ce77fb374a2e6dc072303dd408f314f15546ce14 not found: ID does not exist" Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.264278 4888 scope.go:117] "RemoveContainer" containerID="93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35" Oct 06 16:01:08 crc kubenswrapper[4888]: E1006 16:01:08.264934 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35\": container with ID starting with 93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35 not found: ID does not exist" containerID="93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35" Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.264961 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35"} err="failed to get container status \"93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35\": rpc error: code = NotFound desc = could not find container \"93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35\": container with ID starting with 93d0d5aa60526d124d60dc28f9a366b63ed4007cb7a729eeaf25b6b4b0128f35 not found: ID does not exist" Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.264978 4888 scope.go:117] "RemoveContainer" containerID="d73bc31d96ea6d156e2aa29e9dbb5ca26ac71a418d7b2f03ea4c24d8fad53b96" Oct 06 16:01:08 crc kubenswrapper[4888]: E1006 16:01:08.265289 4888 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d73bc31d96ea6d156e2aa29e9dbb5ca26ac71a418d7b2f03ea4c24d8fad53b96\": container with ID starting with d73bc31d96ea6d156e2aa29e9dbb5ca26ac71a418d7b2f03ea4c24d8fad53b96 not found: ID does not exist" containerID="d73bc31d96ea6d156e2aa29e9dbb5ca26ac71a418d7b2f03ea4c24d8fad53b96" Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.265325 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73bc31d96ea6d156e2aa29e9dbb5ca26ac71a418d7b2f03ea4c24d8fad53b96"} err="failed to get container status \"d73bc31d96ea6d156e2aa29e9dbb5ca26ac71a418d7b2f03ea4c24d8fad53b96\": rpc error: code = NotFound desc = could not find container \"d73bc31d96ea6d156e2aa29e9dbb5ca26ac71a418d7b2f03ea4c24d8fad53b96\": container with ID starting with d73bc31d96ea6d156e2aa29e9dbb5ca26ac71a418d7b2f03ea4c24d8fad53b96 not found: ID does not exist" Oct 06 16:01:08 crc kubenswrapper[4888]: I1006 16:01:08.934972 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10ee6c82-a6ee-46af-beb0-dfd57203c509" path="/var/lib/kubelet/pods/10ee6c82-a6ee-46af-beb0-dfd57203c509/volumes" Oct 06 16:01:12 crc kubenswrapper[4888]: I1006 16:01:12.921782 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:01:12 crc kubenswrapper[4888]: E1006 16:01:12.922665 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:01:26 crc kubenswrapper[4888]: I1006 16:01:26.920895 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:01:26 crc kubenswrapper[4888]: E1006 16:01:26.921663 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:01:41 crc kubenswrapper[4888]: I1006 16:01:41.922366 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:01:41 crc kubenswrapper[4888]: E1006 16:01:41.923442 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:01:56 crc kubenswrapper[4888]: I1006 16:01:56.921468 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:01:56 crc kubenswrapper[4888]: E1006 16:01:56.923150 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:02:08 crc kubenswrapper[4888]: I1006 16:02:08.922641 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:02:08 crc kubenswrapper[4888]: E1006 16:02:08.923725 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:02:23 crc kubenswrapper[4888]: I1006 16:02:23.921743 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:02:23 crc kubenswrapper[4888]: E1006 16:02:23.922745 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:02:38 crc kubenswrapper[4888]: I1006 16:02:38.925638 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:02:38 crc kubenswrapper[4888]: E1006 16:02:38.931267 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:02:49 crc kubenswrapper[4888]: I1006 16:02:49.921875 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:02:49 crc kubenswrapper[4888]: E1006 16:02:49.922628 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:03:00 crc kubenswrapper[4888]: I1006 16:03:00.928945 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:03:00 crc kubenswrapper[4888]: E1006 16:03:00.929754 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:03:13 crc kubenswrapper[4888]: I1006 16:03:13.921442 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:03:13 crc kubenswrapper[4888]: E1006 16:03:13.922172 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.495079 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z4lg8"] Oct 06 16:03:18 crc kubenswrapper[4888]: E1006 16:03:18.495937 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ee6c82-a6ee-46af-beb0-dfd57203c509" containerName="extract-utilities" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.495951 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ee6c82-a6ee-46af-beb0-dfd57203c509" containerName="extract-utilities" Oct 06 16:03:18 crc kubenswrapper[4888]: E1006 16:03:18.495972 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd50e72b-66b9-49db-9a55-7aff1a106a03" containerName="keystone-cron" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.495977 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd50e72b-66b9-49db-9a55-7aff1a106a03" containerName="keystone-cron" Oct 06 16:03:18 crc kubenswrapper[4888]: E1006 16:03:18.495991 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ee6c82-a6ee-46af-beb0-dfd57203c509" containerName="registry-server" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.495997 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ee6c82-a6ee-46af-beb0-dfd57203c509" containerName="registry-server" Oct 06 16:03:18 crc kubenswrapper[4888]: E1006 16:03:18.496013 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ee6c82-a6ee-46af-beb0-dfd57203c509" containerName="extract-content" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.496019 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ee6c82-a6ee-46af-beb0-dfd57203c509" containerName="extract-content" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.496199 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="10ee6c82-a6ee-46af-beb0-dfd57203c509" containerName="registry-server" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.496223 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd50e72b-66b9-49db-9a55-7aff1a106a03" containerName="keystone-cron" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.497846 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.503070 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4lg8"] Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.587192 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-catalog-content\") pod \"redhat-operators-z4lg8\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.587319 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zbc8\" (UniqueName: \"kubernetes.io/projected/8aece788-8bf4-4098-8513-47ee7dd439b6-kube-api-access-8zbc8\") pod \"redhat-operators-z4lg8\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.587366 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-utilities\") pod \"redhat-operators-z4lg8\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.688699 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zbc8\" (UniqueName: \"kubernetes.io/projected/8aece788-8bf4-4098-8513-47ee7dd439b6-kube-api-access-8zbc8\") pod \"redhat-operators-z4lg8\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.689044 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-utilities\") pod \"redhat-operators-z4lg8\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.689108 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-catalog-content\") pod \"redhat-operators-z4lg8\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.689594 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-catalog-content\") pod \"redhat-operators-z4lg8\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.689591 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-utilities\") pod \"redhat-operators-z4lg8\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.707601 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8zbc8\" (UniqueName: \"kubernetes.io/projected/8aece788-8bf4-4098-8513-47ee7dd439b6-kube-api-access-8zbc8\") pod \"redhat-operators-z4lg8\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:18 crc kubenswrapper[4888]: I1006 16:03:18.876674 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:19 crc kubenswrapper[4888]: I1006 16:03:19.392920 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4lg8"] Oct 06 16:03:20 crc kubenswrapper[4888]: I1006 16:03:20.300191 4888 generic.go:334] "Generic (PLEG): container finished" podID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerID="bb529c6afc026bf69c87cc0c5b1729c56fd6bb9d72ada581569483b273f40fd9" exitCode=0 Oct 06 16:03:20 crc kubenswrapper[4888]: I1006 16:03:20.300298 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4lg8" event={"ID":"8aece788-8bf4-4098-8513-47ee7dd439b6","Type":"ContainerDied","Data":"bb529c6afc026bf69c87cc0c5b1729c56fd6bb9d72ada581569483b273f40fd9"} Oct 06 16:03:20 crc kubenswrapper[4888]: I1006 16:03:20.302270 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4lg8" event={"ID":"8aece788-8bf4-4098-8513-47ee7dd439b6","Type":"ContainerStarted","Data":"48f04cbd8b656f0e3655b2bdb25c8c107a35d39b478b5e958c35fa4cc5589222"} Oct 06 16:03:20 crc kubenswrapper[4888]: I1006 16:03:20.302445 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 16:03:22 crc kubenswrapper[4888]: I1006 16:03:22.321969 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4lg8" event={"ID":"8aece788-8bf4-4098-8513-47ee7dd439b6","Type":"ContainerStarted","Data":"9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353"} Oct 06 16:03:24 crc kubenswrapper[4888]: I1006 16:03:24.337260 4888 generic.go:334] "Generic (PLEG): container finished" podID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerID="9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353" exitCode=0 Oct 06 16:03:24 crc kubenswrapper[4888]: I1006 16:03:24.337341 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4lg8" event={"ID":"8aece788-8bf4-4098-8513-47ee7dd439b6","Type":"ContainerDied","Data":"9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353"} Oct 06 16:03:25 crc kubenswrapper[4888]: I1006 16:03:25.346906 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4lg8" event={"ID":"8aece788-8bf4-4098-8513-47ee7dd439b6","Type":"ContainerStarted","Data":"9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee"} Oct 06 16:03:25 crc kubenswrapper[4888]: I1006 16:03:25.371420 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z4lg8" podStartSLOduration=2.739295586 podStartE2EDuration="7.371401258s" podCreationTimestamp="2025-10-06 16:03:18 +0000 UTC" firstStartedPulling="2025-10-06 16:03:20.302222297 +0000 UTC m=+3740.114573015" lastFinishedPulling="2025-10-06 16:03:24.934327969 +0000 UTC m=+3744.746678687" observedRunningTime="2025-10-06 16:03:25.364776277 +0000 UTC m=+3745.177127015" watchObservedRunningTime="2025-10-06 16:03:25.371401258 +0000 UTC m=+3745.183751976" Oct 06 16:03:25 crc 
kubenswrapper[4888]: I1006 16:03:25.921107 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:03:25 crc kubenswrapper[4888]: E1006 16:03:25.921400 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:03:28 crc kubenswrapper[4888]: I1006 16:03:28.877733 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:28 crc kubenswrapper[4888]: I1006 16:03:28.878055 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:29 crc kubenswrapper[4888]: I1006 16:03:29.924891 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z4lg8" podUID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerName="registry-server" probeResult="failure" output=< Oct 06 16:03:29 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s Oct 06 16:03:29 crc kubenswrapper[4888]: > Oct 06 16:03:37 crc kubenswrapper[4888]: I1006 16:03:37.921694 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:03:37 crc kubenswrapper[4888]: E1006 16:03:37.922507 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:03:38 crc kubenswrapper[4888]: I1006 16:03:38.932601 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:38 crc kubenswrapper[4888]: I1006 16:03:38.980546 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:39 crc kubenswrapper[4888]: I1006 16:03:39.173688 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z4lg8"] Oct 06 16:03:40 crc kubenswrapper[4888]: I1006 16:03:40.459474 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z4lg8" podUID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerName="registry-server" containerID="cri-o://9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee" gracePeriod=2 Oct 06 16:03:40 crc kubenswrapper[4888]: I1006 16:03:40.912431 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.015528 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zbc8\" (UniqueName: \"kubernetes.io/projected/8aece788-8bf4-4098-8513-47ee7dd439b6-kube-api-access-8zbc8\") pod \"8aece788-8bf4-4098-8513-47ee7dd439b6\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.015857 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-catalog-content\") pod \"8aece788-8bf4-4098-8513-47ee7dd439b6\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.015979 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-utilities\") pod \"8aece788-8bf4-4098-8513-47ee7dd439b6\" (UID: \"8aece788-8bf4-4098-8513-47ee7dd439b6\") " Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.018472 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-utilities" (OuterVolumeSpecName: "utilities") pod "8aece788-8bf4-4098-8513-47ee7dd439b6" (UID: "8aece788-8bf4-4098-8513-47ee7dd439b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.022095 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aece788-8bf4-4098-8513-47ee7dd439b6-kube-api-access-8zbc8" (OuterVolumeSpecName: "kube-api-access-8zbc8") pod "8aece788-8bf4-4098-8513-47ee7dd439b6" (UID: "8aece788-8bf4-4098-8513-47ee7dd439b6"). InnerVolumeSpecName "kube-api-access-8zbc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.118518 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zbc8\" (UniqueName: \"kubernetes.io/projected/8aece788-8bf4-4098-8513-47ee7dd439b6-kube-api-access-8zbc8\") on node \"crc\" DevicePath \"\"" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.118559 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.121861 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8aece788-8bf4-4098-8513-47ee7dd439b6" (UID: "8aece788-8bf4-4098-8513-47ee7dd439b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.220655 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aece788-8bf4-4098-8513-47ee7dd439b6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.469826 4888 generic.go:334] "Generic (PLEG): container finished" podID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerID="9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee" exitCode=0 Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.469884 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4lg8" event={"ID":"8aece788-8bf4-4098-8513-47ee7dd439b6","Type":"ContainerDied","Data":"9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee"} Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.469916 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4lg8" event={"ID":"8aece788-8bf4-4098-8513-47ee7dd439b6","Type":"ContainerDied","Data":"48f04cbd8b656f0e3655b2bdb25c8c107a35d39b478b5e958c35fa4cc5589222"} Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.469938 4888 scope.go:117] "RemoveContainer" containerID="9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.470114 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4lg8" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.507104 4888 scope.go:117] "RemoveContainer" containerID="9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.508677 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z4lg8"] Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.518017 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z4lg8"] Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.533277 4888 scope.go:117] "RemoveContainer" containerID="bb529c6afc026bf69c87cc0c5b1729c56fd6bb9d72ada581569483b273f40fd9" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.572228 4888 scope.go:117] "RemoveContainer" containerID="9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee" Oct 06 16:03:41 crc kubenswrapper[4888]: E1006 16:03:41.572630 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee\": container with ID starting with 9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee not found: ID does not exist" containerID="9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.572663 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee"} err="failed to get container status \"9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee\": rpc error: code = NotFound desc = could not find container \"9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee\": container with ID starting with 9c39b5bd106a29153bb450d1eb949446a348060adde34d3bde321280c9a53eee not found: ID does not exist" Oct 06 16:03:41 crc 
kubenswrapper[4888]: I1006 16:03:41.572692 4888 scope.go:117] "RemoveContainer" containerID="9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353" Oct 06 16:03:41 crc kubenswrapper[4888]: E1006 16:03:41.573074 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353\": container with ID starting with 9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353 not found: ID does not exist" containerID="9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.573129 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353"} err="failed to get container status \"9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353\": rpc error: code = NotFound desc = could not find container \"9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353\": container with ID starting with 9272d6d5dbcf6883d960b45113edecbbf6b5a862b05e4cb0c6a5e7ffd1138353 not found: ID does not exist" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.573162 4888 scope.go:117] "RemoveContainer" containerID="bb529c6afc026bf69c87cc0c5b1729c56fd6bb9d72ada581569483b273f40fd9" Oct 06 16:03:41 crc kubenswrapper[4888]: E1006 16:03:41.573506 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb529c6afc026bf69c87cc0c5b1729c56fd6bb9d72ada581569483b273f40fd9\": container with ID starting with bb529c6afc026bf69c87cc0c5b1729c56fd6bb9d72ada581569483b273f40fd9 not found: ID does not exist" containerID="bb529c6afc026bf69c87cc0c5b1729c56fd6bb9d72ada581569483b273f40fd9" Oct 06 16:03:41 crc kubenswrapper[4888]: I1006 16:03:41.573541 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb529c6afc026bf69c87cc0c5b1729c56fd6bb9d72ada581569483b273f40fd9"} err="failed to get container status \"bb529c6afc026bf69c87cc0c5b1729c56fd6bb9d72ada581569483b273f40fd9\": rpc error: code = NotFound desc = could not find container \"bb529c6afc026bf69c87cc0c5b1729c56fd6bb9d72ada581569483b273f40fd9\": container with ID starting with bb529c6afc026bf69c87cc0c5b1729c56fd6bb9d72ada581569483b273f40fd9 not found: ID does not exist" Oct 06 16:03:42 crc kubenswrapper[4888]: I1006 16:03:42.945334 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aece788-8bf4-4098-8513-47ee7dd439b6" path="/var/lib/kubelet/pods/8aece788-8bf4-4098-8513-47ee7dd439b6/volumes" Oct 06 16:03:49 crc kubenswrapper[4888]: I1006 16:03:49.921584 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:03:49 crc kubenswrapper[4888]: E1006 16:03:49.922340 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:04:01 crc kubenswrapper[4888]: I1006 16:04:01.921277 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" 
Oct 06 16:04:01 crc kubenswrapper[4888]: E1006 16:04:01.922109 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 16:04:13 crc kubenswrapper[4888]: I1006 16:04:13.921454 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee"
Oct 06 16:04:13 crc kubenswrapper[4888]: E1006 16:04:13.922211 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 16:04:27 crc kubenswrapper[4888]: I1006 16:04:27.921504 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee"
Oct 06 16:04:27 crc kubenswrapper[4888]: E1006 16:04:27.922473 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 16:04:38 crc kubenswrapper[4888]: I1006 16:04:38.924543 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee"
Oct 06 16:04:38 crc kubenswrapper[4888]: E1006 16:04:38.925339 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 16:04:52 crc kubenswrapper[4888]: I1006 16:04:52.922874 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee"
Oct 06 16:04:52 crc kubenswrapper[4888]: E1006 16:04:52.923681 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 16:05:03 crc kubenswrapper[4888]: I1006 16:05:03.921395 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee"
Oct 06 16:05:03 crc kubenswrapper[4888]: E1006 16:05:03.922285 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 16:05:18 crc kubenswrapper[4888]: I1006 16:05:18.922071 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee"
Oct 06 16:05:18 crc kubenswrapper[4888]: E1006 16:05:18.923017 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 16:05:29 crc kubenswrapper[4888]: I1006 16:05:29.921957 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee"
Oct 06 16:05:29 crc kubenswrapper[4888]: E1006 16:05:29.922962 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 16:05:44 crc kubenswrapper[4888]: I1006 16:05:44.921447 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee"
Oct 06 16:05:45 crc kubenswrapper[4888]: I1006 16:05:45.522420 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"737ca743d7735cfdf3f73e3ae9e6c4d45a1e0c254a9a8379eaf3e0d71d605811"}
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.056415 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9s7pn"]
Oct 06 16:07:52 crc kubenswrapper[4888]: E1006 16:07:52.057377 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerName="extract-utilities"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.057388 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerName="extract-utilities"
Oct 06 16:07:52 crc kubenswrapper[4888]: E1006 16:07:52.057424 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerName="registry-server"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.057430 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerName="registry-server"
Oct 06 16:07:52 crc kubenswrapper[4888]: E1006 16:07:52.057444 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerName="extract-content"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.057450 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerName="extract-content"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.057610 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aece788-8bf4-4098-8513-47ee7dd439b6" containerName="registry-server"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.058932 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.097824 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9s7pn"]
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.222540 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2mf\" (UniqueName: \"kubernetes.io/projected/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-kube-api-access-hr2mf\") pod \"certified-operators-9s7pn\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") " pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.222912 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-utilities\") pod \"certified-operators-9s7pn\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") " pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.223093 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-catalog-content\") pod \"certified-operators-9s7pn\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") " pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.324405 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-catalog-content\") pod \"certified-operators-9s7pn\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") " pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.324506 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr2mf\" (UniqueName: \"kubernetes.io/projected/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-kube-api-access-hr2mf\") pod \"certified-operators-9s7pn\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") " pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.324583 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-utilities\") pod \"certified-operators-9s7pn\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") " pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.324971 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-catalog-content\") pod \"certified-operators-9s7pn\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") " pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.324990 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-utilities\") pod \"certified-operators-9s7pn\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") " pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.344173 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr2mf\" (UniqueName: \"kubernetes.io/projected/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-kube-api-access-hr2mf\") pod \"certified-operators-9s7pn\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") " pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.380636 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:07:52 crc kubenswrapper[4888]: I1006 16:07:52.732608 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9s7pn"]
Oct 06 16:07:52 crc kubenswrapper[4888]: W1006 16:07:52.746234 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c7aae8_144f_42d4_aa07_88f9bc6c8c7a.slice/crio-acd0934d7c0ef2b150f639ac21352322f3cc2d8dd6bbef3b9e59b4b7f1edc9bc WatchSource:0}: Error finding container acd0934d7c0ef2b150f639ac21352322f3cc2d8dd6bbef3b9e59b4b7f1edc9bc: Status 404 returned error can't find the container with id acd0934d7c0ef2b150f639ac21352322f3cc2d8dd6bbef3b9e59b4b7f1edc9bc
Oct 06 16:07:53 crc kubenswrapper[4888]: I1006 16:07:53.535363 4888 generic.go:334] "Generic (PLEG): container finished" podID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" containerID="c51965860b23fe2d7ab5bb13d95f5575bf776efd2d35bb89eeb563da9e6ef59e" exitCode=0
Oct 06 16:07:53 crc kubenswrapper[4888]: I1006 16:07:53.535460 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s7pn" event={"ID":"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a","Type":"ContainerDied","Data":"c51965860b23fe2d7ab5bb13d95f5575bf776efd2d35bb89eeb563da9e6ef59e"}
Oct 06 16:07:53 crc kubenswrapper[4888]: I1006 16:07:53.535677 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s7pn" event={"ID":"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a","Type":"ContainerStarted","Data":"acd0934d7c0ef2b150f639ac21352322f3cc2d8dd6bbef3b9e59b4b7f1edc9bc"}
Oct 06 16:07:54 crc kubenswrapper[4888]: I1006 16:07:54.544517 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s7pn" event={"ID":"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a","Type":"ContainerStarted","Data":"3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9"}
Oct 06 16:07:55 crc kubenswrapper[4888]: I1006 16:07:55.554303 4888 generic.go:334] "Generic (PLEG): container finished" podID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" containerID="3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9" exitCode=0
Oct 06 16:07:55 crc kubenswrapper[4888]: I1006 16:07:55.554348 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s7pn" event={"ID":"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a","Type":"ContainerDied","Data":"3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9"}
Oct 06 16:07:56 crc kubenswrapper[4888]: I1006 16:07:56.565247 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s7pn" event={"ID":"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a","Type":"ContainerStarted","Data":"c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789"}
Oct 06 16:07:56 crc kubenswrapper[4888]: I1006 16:07:56.597431 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9s7pn" podStartSLOduration=2.071809537 podStartE2EDuration="4.597406636s" podCreationTimestamp="2025-10-06 16:07:52 +0000 UTC" firstStartedPulling="2025-10-06 16:07:53.537370806 +0000 UTC m=+4013.349721524" lastFinishedPulling="2025-10-06 16:07:56.062967905 +0000 UTC m=+4015.875318623" observedRunningTime="2025-10-06 16:07:56.581613555 +0000 UTC m=+4016.393964273" watchObservedRunningTime="2025-10-06 16:07:56.597406636 +0000 UTC m=+4016.409757354"
Oct 06 16:08:02 crc kubenswrapper[4888]: I1006 16:08:02.381557 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:08:02 crc kubenswrapper[4888]: I1006 16:08:02.382009 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:08:02 crc kubenswrapper[4888]: I1006 16:08:02.564194 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 16:08:02 crc kubenswrapper[4888]: I1006 16:08:02.564257 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 16:08:03 crc kubenswrapper[4888]: I1006 16:08:03.014878 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:08:03 crc kubenswrapper[4888]: I1006 16:08:03.061936 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:08:03 crc kubenswrapper[4888]: I1006 16:08:03.252008 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9s7pn"]
Oct 06 16:08:04 crc kubenswrapper[4888]: I1006 16:08:04.628305 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9s7pn" podUID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" containerName="registry-server" containerID="cri-o://c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789" gracePeriod=2
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.048178 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.168375 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-catalog-content\") pod \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") "
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.168414 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr2mf\" (UniqueName: \"kubernetes.io/projected/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-kube-api-access-hr2mf\") pod \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") "
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.168438 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-utilities\") pod \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\" (UID: \"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a\") "
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.169459 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-utilities" (OuterVolumeSpecName: "utilities") pod "d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" (UID: "d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.173921 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-kube-api-access-hr2mf" (OuterVolumeSpecName: "kube-api-access-hr2mf") pod "d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" (UID: "d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a"). InnerVolumeSpecName "kube-api-access-hr2mf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.211500 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" (UID: "d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.271155 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.271201 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr2mf\" (UniqueName: \"kubernetes.io/projected/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-kube-api-access-hr2mf\") on node \"crc\" DevicePath \"\""
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.271212 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.642947 4888 generic.go:334] "Generic (PLEG): container finished" podID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" containerID="c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789" exitCode=0
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.643003 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s7pn" event={"ID":"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a","Type":"ContainerDied","Data":"c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789"}
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.643039 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9s7pn" event={"ID":"d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a","Type":"ContainerDied","Data":"acd0934d7c0ef2b150f639ac21352322f3cc2d8dd6bbef3b9e59b4b7f1edc9bc"}
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.643062 4888 scope.go:117] "RemoveContainer" containerID="c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789"
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.644498 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9s7pn"
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.687330 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9s7pn"]
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.706868 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9s7pn"]
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.712458 4888 scope.go:117] "RemoveContainer" containerID="3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9"
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.744204 4888 scope.go:117] "RemoveContainer" containerID="c51965860b23fe2d7ab5bb13d95f5575bf776efd2d35bb89eeb563da9e6ef59e"
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.788305 4888 scope.go:117] "RemoveContainer" containerID="c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789"
Oct 06 16:08:05 crc kubenswrapper[4888]: E1006 16:08:05.790828 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789\": container with ID starting with c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789 not found: ID does not exist" containerID="c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789"
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.790878 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789"} err="failed to get container status \"c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789\": rpc error: code = NotFound desc = could not find container \"c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789\": container with ID starting with c4515b5120a98037d5dcbd4998849a500e7d7ba5aff36620cd53be48f823a789 not found: ID does not exist"
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.790909 4888 scope.go:117] "RemoveContainer" containerID="3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9"
Oct 06 16:08:05 crc kubenswrapper[4888]: E1006 16:08:05.791247 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9\": container with ID starting with 3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9 not found: ID does not exist" containerID="3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9"
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.791274 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9"} err="failed to get container status \"3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9\": rpc error: code = NotFound desc = could not find container \"3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9\": container with ID starting with 3c3ca4621bdfcdaef2de3108c7226b2c85d785958e715c71d7ea6ae60e9dc7d9 not found: ID does not exist"
Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.791300 4888 scope.go:117] "RemoveContainer" containerID="c51965860b23fe2d7ab5bb13d95f5575bf776efd2d35bb89eeb563da9e6ef59e"
Oct 06 16:08:05 crc kubenswrapper[4888]: E1006 16:08:05.791580 4888 log.go:32] "ContainerStatus from runtime service
failed" err="rpc error: code = NotFound desc = could not find container \"c51965860b23fe2d7ab5bb13d95f5575bf776efd2d35bb89eeb563da9e6ef59e\": container with ID starting with c51965860b23fe2d7ab5bb13d95f5575bf776efd2d35bb89eeb563da9e6ef59e not found: ID does not exist" containerID="c51965860b23fe2d7ab5bb13d95f5575bf776efd2d35bb89eeb563da9e6ef59e" Oct 06 16:08:05 crc kubenswrapper[4888]: I1006 16:08:05.791606 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c51965860b23fe2d7ab5bb13d95f5575bf776efd2d35bb89eeb563da9e6ef59e"} err="failed to get container status \"c51965860b23fe2d7ab5bb13d95f5575bf776efd2d35bb89eeb563da9e6ef59e\": rpc error: code = NotFound desc = could not find container \"c51965860b23fe2d7ab5bb13d95f5575bf776efd2d35bb89eeb563da9e6ef59e\": container with ID starting with c51965860b23fe2d7ab5bb13d95f5575bf776efd2d35bb89eeb563da9e6ef59e not found: ID does not exist" Oct 06 16:08:05 crc kubenswrapper[4888]: E1006 16:08:05.896890 4888 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c7aae8_144f_42d4_aa07_88f9bc6c8c7a.slice/crio-acd0934d7c0ef2b150f639ac21352322f3cc2d8dd6bbef3b9e59b4b7f1edc9bc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c7aae8_144f_42d4_aa07_88f9bc6c8c7a.slice\": RecentStats: unable to find data in memory cache]" Oct 06 16:08:06 crc kubenswrapper[4888]: I1006 16:08:06.933133 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" path="/var/lib/kubelet/pods/d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a/volumes" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.462054 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pjmkw"] Oct 06 16:08:07 crc kubenswrapper[4888]: E1006 16:08:07.462455 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" containerName="extract-utilities" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.462475 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" containerName="extract-utilities" Oct 06 16:08:07 crc kubenswrapper[4888]: E1006 16:08:07.462489 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" containerName="extract-content" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.462497 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" containerName="extract-content" Oct 06 16:08:07 crc kubenswrapper[4888]: E1006 16:08:07.462517 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" containerName="registry-server" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.462524 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" containerName="registry-server" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.462823 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c7aae8-144f-42d4-aa07-88f9bc6c8c7a" containerName="registry-server" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.464468 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.475103 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pjmkw"] Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.521469 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-catalog-content\") pod \"community-operators-pjmkw\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.521776 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-utilities\") pod \"community-operators-pjmkw\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.521876 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb955\" (UniqueName: \"kubernetes.io/projected/1bd340af-8988-4eab-8492-7ae135b0c14b-kube-api-access-wb955\") pod \"community-operators-pjmkw\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.623777 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-catalog-content\") pod \"community-operators-pjmkw\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.623929 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-utilities\") pod \"community-operators-pjmkw\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.623967 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb955\" (UniqueName: \"kubernetes.io/projected/1bd340af-8988-4eab-8492-7ae135b0c14b-kube-api-access-wb955\") pod \"community-operators-pjmkw\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.624470 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-utilities\") pod \"community-operators-pjmkw\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.624696 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-catalog-content\") pod \"community-operators-pjmkw\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.645317 4888 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wb955\" (UniqueName: \"kubernetes.io/projected/1bd340af-8988-4eab-8492-7ae135b0c14b-kube-api-access-wb955\") pod \"community-operators-pjmkw\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:07 crc kubenswrapper[4888]: I1006 16:08:07.791895 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:08 crc kubenswrapper[4888]: I1006 16:08:08.384438 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pjmkw"] Oct 06 16:08:08 crc kubenswrapper[4888]: I1006 16:08:08.671369 4888 generic.go:334] "Generic (PLEG): container finished" podID="1bd340af-8988-4eab-8492-7ae135b0c14b" containerID="89ce29df9b77a45a9d248f1033a03ee97f2dbdda966774c11fdf07e52d7c9d60" exitCode=0 Oct 06 16:08:08 crc kubenswrapper[4888]: I1006 16:08:08.671560 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjmkw" event={"ID":"1bd340af-8988-4eab-8492-7ae135b0c14b","Type":"ContainerDied","Data":"89ce29df9b77a45a9d248f1033a03ee97f2dbdda966774c11fdf07e52d7c9d60"} Oct 06 16:08:08 crc kubenswrapper[4888]: I1006 16:08:08.671861 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjmkw" event={"ID":"1bd340af-8988-4eab-8492-7ae135b0c14b","Type":"ContainerStarted","Data":"e8eaaf4670a0b10d84ee6c6d4075fa46d94415100a59c152b210889776148cfd"} Oct 06 16:08:09 crc kubenswrapper[4888]: I1006 16:08:09.682225 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjmkw" event={"ID":"1bd340af-8988-4eab-8492-7ae135b0c14b","Type":"ContainerStarted","Data":"9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868"} Oct 06 16:08:10 crc kubenswrapper[4888]: I1006 16:08:10.690739 4888 generic.go:334] "Generic (PLEG): container finished" podID="1bd340af-8988-4eab-8492-7ae135b0c14b" containerID="9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868" exitCode=0 Oct 06 16:08:10 crc kubenswrapper[4888]: I1006 16:08:10.690822 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjmkw" event={"ID":"1bd340af-8988-4eab-8492-7ae135b0c14b","Type":"ContainerDied","Data":"9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868"} Oct 06 16:08:12 crc kubenswrapper[4888]: I1006 16:08:12.720880 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjmkw" event={"ID":"1bd340af-8988-4eab-8492-7ae135b0c14b","Type":"ContainerStarted","Data":"c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca"} Oct 06 16:08:12 crc kubenswrapper[4888]: I1006 16:08:12.760495 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pjmkw" podStartSLOduration=3.170237339 podStartE2EDuration="5.760463925s" podCreationTimestamp="2025-10-06 16:08:07 +0000 UTC" firstStartedPulling="2025-10-06 16:08:08.67273386 +0000 UTC m=+4028.485084578" lastFinishedPulling="2025-10-06 16:08:11.262960446 +0000 UTC m=+4031.075311164" observedRunningTime="2025-10-06 16:08:12.743080444 +0000 UTC m=+4032.555431162" watchObservedRunningTime="2025-10-06 16:08:12.760463925 +0000 UTC m=+4032.572814643" Oct 06 16:08:17 crc kubenswrapper[4888]: I1006 16:08:17.793413 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:17 crc kubenswrapper[4888]: I1006 16:08:17.794022 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:17 crc kubenswrapper[4888]: I1006 16:08:17.845091 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:18 crc kubenswrapper[4888]: I1006 16:08:18.814392 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:18 crc kubenswrapper[4888]: I1006 16:08:18.858818 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pjmkw"] Oct 06 16:08:20 crc kubenswrapper[4888]: I1006 16:08:20.785672 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pjmkw" podUID="1bd340af-8988-4eab-8492-7ae135b0c14b" containerName="registry-server" containerID="cri-o://c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca" gracePeriod=2 Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.291090 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.395280 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-utilities\") pod \"1bd340af-8988-4eab-8492-7ae135b0c14b\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.395564 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb955\" (UniqueName: \"kubernetes.io/projected/1bd340af-8988-4eab-8492-7ae135b0c14b-kube-api-access-wb955\") pod \"1bd340af-8988-4eab-8492-7ae135b0c14b\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.395598 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-catalog-content\") pod \"1bd340af-8988-4eab-8492-7ae135b0c14b\" (UID: \"1bd340af-8988-4eab-8492-7ae135b0c14b\") " Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.396393 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-utilities" (OuterVolumeSpecName: "utilities") pod "1bd340af-8988-4eab-8492-7ae135b0c14b" (UID: "1bd340af-8988-4eab-8492-7ae135b0c14b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.451331 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bd340af-8988-4eab-8492-7ae135b0c14b" (UID: "1bd340af-8988-4eab-8492-7ae135b0c14b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.498470 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.498512 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd340af-8988-4eab-8492-7ae135b0c14b-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.795582 4888 generic.go:334] "Generic (PLEG): container finished" podID="1bd340af-8988-4eab-8492-7ae135b0c14b" containerID="c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca" exitCode=0 Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.795647 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjmkw" event={"ID":"1bd340af-8988-4eab-8492-7ae135b0c14b","Type":"ContainerDied","Data":"c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca"} Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.795659 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjmkw" Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.795695 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjmkw" event={"ID":"1bd340af-8988-4eab-8492-7ae135b0c14b","Type":"ContainerDied","Data":"e8eaaf4670a0b10d84ee6c6d4075fa46d94415100a59c152b210889776148cfd"} Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.795724 4888 scope.go:117] "RemoveContainer" containerID="c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca" Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.819124 4888 scope.go:117] "RemoveContainer" containerID="9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868" Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.935263 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd340af-8988-4eab-8492-7ae135b0c14b-kube-api-access-wb955" (OuterVolumeSpecName: "kube-api-access-wb955") pod "1bd340af-8988-4eab-8492-7ae135b0c14b" (UID: "1bd340af-8988-4eab-8492-7ae135b0c14b"). InnerVolumeSpecName "kube-api-access-wb955". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:08:21 crc kubenswrapper[4888]: I1006 16:08:21.948119 4888 scope.go:117] "RemoveContainer" containerID="89ce29df9b77a45a9d248f1033a03ee97f2dbdda966774c11fdf07e52d7c9d60" Oct 06 16:08:22 crc kubenswrapper[4888]: I1006 16:08:22.011265 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb955\" (UniqueName: \"kubernetes.io/projected/1bd340af-8988-4eab-8492-7ae135b0c14b-kube-api-access-wb955\") on node \"crc\" DevicePath \"\"" Oct 06 16:08:22 crc kubenswrapper[4888]: I1006 16:08:22.046767 4888 scope.go:117] "RemoveContainer" containerID="c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca" Oct 06 16:08:22 crc kubenswrapper[4888]: E1006 16:08:22.047193 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca\": container with ID starting with c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca not found: ID does not exist" containerID="c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca" Oct 06 16:08:22 crc kubenswrapper[4888]: I1006 16:08:22.047262 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca"} err="failed to get container status \"c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca\": rpc error: code = NotFound desc = could not find container \"c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca\": container with ID starting with c0cc7f9dcf752e2acd5b2b5ffabd7845c9031d3994be978cee04015d004b8aca not found: ID does not exist" Oct 06 16:08:22 crc kubenswrapper[4888]: I1006 16:08:22.047286 4888 scope.go:117] "RemoveContainer" containerID="9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868" Oct 06 16:08:22 crc kubenswrapper[4888]: E1006 16:08:22.047741 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868\": container with ID starting with 9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868 not found: ID does not exist" containerID="9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868" Oct 06 16:08:22 crc kubenswrapper[4888]: I1006 16:08:22.047781 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868"} err="failed to get container status \"9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868\": rpc error: code = NotFound desc = could not find container \"9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868\": container with ID starting with 9263c283a4e5f2db7b5b65a3b93170c1ec697e4bfbfb6e60a1433e344c205868 not found: ID does not exist" Oct 06 16:08:22 crc kubenswrapper[4888]: I1006 16:08:22.047861 4888 scope.go:117] "RemoveContainer" containerID="89ce29df9b77a45a9d248f1033a03ee97f2dbdda966774c11fdf07e52d7c9d60" Oct 06 16:08:22 crc kubenswrapper[4888]: E1006 16:08:22.048276 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89ce29df9b77a45a9d248f1033a03ee97f2dbdda966774c11fdf07e52d7c9d60\": container with ID starting with 89ce29df9b77a45a9d248f1033a03ee97f2dbdda966774c11fdf07e52d7c9d60 not found: ID does not 
exist" containerID="89ce29df9b77a45a9d248f1033a03ee97f2dbdda966774c11fdf07e52d7c9d60" Oct 06 16:08:22 crc kubenswrapper[4888]: I1006 16:08:22.048324 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ce29df9b77a45a9d248f1033a03ee97f2dbdda966774c11fdf07e52d7c9d60"} err="failed to get container status \"89ce29df9b77a45a9d248f1033a03ee97f2dbdda966774c11fdf07e52d7c9d60\": rpc error: code = NotFound desc = could not find container \"89ce29df9b77a45a9d248f1033a03ee97f2dbdda966774c11fdf07e52d7c9d60\": container with ID starting with 89ce29df9b77a45a9d248f1033a03ee97f2dbdda966774c11fdf07e52d7c9d60 not found: ID does not exist" Oct 06 16:08:22 crc kubenswrapper[4888]: I1006 16:08:22.139928 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pjmkw"] Oct 06 16:08:22 crc kubenswrapper[4888]: I1006 16:08:22.151226 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pjmkw"] Oct 06 16:08:22 crc kubenswrapper[4888]: I1006 16:08:22.942945 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bd340af-8988-4eab-8492-7ae135b0c14b" path="/var/lib/kubelet/pods/1bd340af-8988-4eab-8492-7ae135b0c14b/volumes" Oct 06 16:08:32 crc kubenswrapper[4888]: I1006 16:08:32.564011 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 16:08:32 crc kubenswrapper[4888]: I1006 16:08:32.564519 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 16:09:02 crc kubenswrapper[4888]: I1006 16:09:02.564191 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 16:09:02 crc kubenswrapper[4888]: I1006 16:09:02.564691 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 16:09:02 crc kubenswrapper[4888]: I1006 16:09:02.564732 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 16:09:02 crc kubenswrapper[4888]: I1006 16:09:02.565505 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"737ca743d7735cfdf3f73e3ae9e6c4d45a1e0c254a9a8379eaf3e0d71d605811"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 16:09:02 crc kubenswrapper[4888]: I1006 16:09:02.565549 4888 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://737ca743d7735cfdf3f73e3ae9e6c4d45a1e0c254a9a8379eaf3e0d71d605811" gracePeriod=600 Oct 06 16:09:03 crc kubenswrapper[4888]: I1006 16:09:03.186819 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="737ca743d7735cfdf3f73e3ae9e6c4d45a1e0c254a9a8379eaf3e0d71d605811" exitCode=0 Oct 06 16:09:03 crc kubenswrapper[4888]: I1006 16:09:03.186921 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"737ca743d7735cfdf3f73e3ae9e6c4d45a1e0c254a9a8379eaf3e0d71d605811"} Oct 06 16:09:03 crc kubenswrapper[4888]: I1006 16:09:03.187226 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e"} Oct 06 16:09:03 crc kubenswrapper[4888]: I1006 16:09:03.187253 4888 scope.go:117] "RemoveContainer" containerID="23dbbe6422a9f4d16974c5f83380c85b478322df1a1542cb228e0b5a902bfeee" Oct 06 16:11:02 crc kubenswrapper[4888]: I1006 16:11:02.563834 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 16:11:02 crc kubenswrapper[4888]: I1006 16:11:02.564309 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 16:11:32 crc kubenswrapper[4888]: I1006 16:11:32.563408 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 16:11:32 crc kubenswrapper[4888]: I1006 16:11:32.563950 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.044443 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7lnht"] Oct 06 16:11:33 crc kubenswrapper[4888]: E1006 16:11:33.045109 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd340af-8988-4eab-8492-7ae135b0c14b" containerName="registry-server" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.045129 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd340af-8988-4eab-8492-7ae135b0c14b" containerName="registry-server" Oct 06 16:11:33 crc kubenswrapper[4888]: E1006 16:11:33.045156 4888 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1bd340af-8988-4eab-8492-7ae135b0c14b" containerName="extract-content" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.045163 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd340af-8988-4eab-8492-7ae135b0c14b" containerName="extract-content" Oct 06 16:11:33 crc kubenswrapper[4888]: E1006 16:11:33.045175 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd340af-8988-4eab-8492-7ae135b0c14b" containerName="extract-utilities" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.045182 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd340af-8988-4eab-8492-7ae135b0c14b" containerName="extract-utilities" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.045366 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bd340af-8988-4eab-8492-7ae135b0c14b" containerName="registry-server" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.046855 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.061387 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lnht"] Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.071740 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5krp\" (UniqueName: \"kubernetes.io/projected/52786d0a-965b-4e49-aba5-56edeb74e12c-kube-api-access-q5krp\") pod \"redhat-marketplace-7lnht\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.071856 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-catalog-content\") pod \"redhat-marketplace-7lnht\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.071882 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-utilities\") pod \"redhat-marketplace-7lnht\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.173286 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-catalog-content\") pod \"redhat-marketplace-7lnht\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.173332 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-utilities\") pod \"redhat-marketplace-7lnht\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.173408 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5krp\" (UniqueName: \"kubernetes.io/projected/52786d0a-965b-4e49-aba5-56edeb74e12c-kube-api-access-q5krp\") pod 
\"redhat-marketplace-7lnht\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.174266 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-catalog-content\") pod \"redhat-marketplace-7lnht\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.174483 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-utilities\") pod \"redhat-marketplace-7lnht\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.209241 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5krp\" (UniqueName: \"kubernetes.io/projected/52786d0a-965b-4e49-aba5-56edeb74e12c-kube-api-access-q5krp\") pod \"redhat-marketplace-7lnht\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.382491 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:33 crc kubenswrapper[4888]: I1006 16:11:33.795431 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lnht"] Oct 06 16:11:33 crc kubenswrapper[4888]: W1006 16:11:33.944196 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52786d0a_965b_4e49_aba5_56edeb74e12c.slice/crio-fd67c42df9581244fd756713658f3c7f7fe4f3638434385adf717dbbcb2d1377 WatchSource:0}: Error finding container fd67c42df9581244fd756713658f3c7f7fe4f3638434385adf717dbbcb2d1377: Status 404 returned error can't find the container with id fd67c42df9581244fd756713658f3c7f7fe4f3638434385adf717dbbcb2d1377 Oct 06 16:11:34 crc kubenswrapper[4888]: I1006 16:11:34.439967 4888 generic.go:334] "Generic (PLEG): container finished" podID="52786d0a-965b-4e49-aba5-56edeb74e12c" containerID="972f3df6fd7eedc34d3e1e740f83576c245065a1c03d5b316cb69f3e07d2959d" exitCode=0 Oct 06 16:11:34 crc kubenswrapper[4888]: I1006 16:11:34.440212 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lnht" event={"ID":"52786d0a-965b-4e49-aba5-56edeb74e12c","Type":"ContainerDied","Data":"972f3df6fd7eedc34d3e1e740f83576c245065a1c03d5b316cb69f3e07d2959d"} Oct 06 16:11:34 crc kubenswrapper[4888]: I1006 16:11:34.440316 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lnht" event={"ID":"52786d0a-965b-4e49-aba5-56edeb74e12c","Type":"ContainerStarted","Data":"fd67c42df9581244fd756713658f3c7f7fe4f3638434385adf717dbbcb2d1377"} Oct 06 16:11:34 crc kubenswrapper[4888]: I1006 16:11:34.443036 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 16:11:36 crc kubenswrapper[4888]: I1006 16:11:36.455751 4888 generic.go:334] "Generic (PLEG): container finished" podID="52786d0a-965b-4e49-aba5-56edeb74e12c" containerID="46abc4a50194a81fde2112fa1811cda32dc0e1f6f4cbcde90af57f5b9ee07fbd" exitCode=0 Oct 06 16:11:36 crc 
kubenswrapper[4888]: I1006 16:11:36.455826 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lnht" event={"ID":"52786d0a-965b-4e49-aba5-56edeb74e12c","Type":"ContainerDied","Data":"46abc4a50194a81fde2112fa1811cda32dc0e1f6f4cbcde90af57f5b9ee07fbd"} Oct 06 16:11:37 crc kubenswrapper[4888]: I1006 16:11:37.467691 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lnht" event={"ID":"52786d0a-965b-4e49-aba5-56edeb74e12c","Type":"ContainerStarted","Data":"bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598"} Oct 06 16:11:37 crc kubenswrapper[4888]: I1006 16:11:37.508780 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7lnht" podStartSLOduration=1.953387092 podStartE2EDuration="4.50876207s" podCreationTimestamp="2025-10-06 16:11:33 +0000 UTC" firstStartedPulling="2025-10-06 16:11:34.442052322 +0000 UTC m=+4234.254403080" lastFinishedPulling="2025-10-06 16:11:36.99742734 +0000 UTC m=+4236.809778058" observedRunningTime="2025-10-06 16:11:37.487984175 +0000 UTC m=+4237.300334893" watchObservedRunningTime="2025-10-06 16:11:37.50876207 +0000 UTC m=+4237.321112838" Oct 06 16:11:43 crc kubenswrapper[4888]: I1006 16:11:43.383091 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:43 crc kubenswrapper[4888]: I1006 16:11:43.383595 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:43 crc kubenswrapper[4888]: I1006 16:11:43.428006 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:43 crc kubenswrapper[4888]: I1006 16:11:43.563641 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:43 crc kubenswrapper[4888]: I1006 16:11:43.664680 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lnht"] Oct 06 16:11:45 crc kubenswrapper[4888]: I1006 16:11:45.533579 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7lnht" podUID="52786d0a-965b-4e49-aba5-56edeb74e12c" containerName="registry-server" containerID="cri-o://bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598" gracePeriod=2 Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.233643 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.324424 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-utilities\") pod \"52786d0a-965b-4e49-aba5-56edeb74e12c\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.324500 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5krp\" (UniqueName: \"kubernetes.io/projected/52786d0a-965b-4e49-aba5-56edeb74e12c-kube-api-access-q5krp\") pod \"52786d0a-965b-4e49-aba5-56edeb74e12c\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.325149 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-utilities" (OuterVolumeSpecName: "utilities") pod "52786d0a-965b-4e49-aba5-56edeb74e12c" (UID: "52786d0a-965b-4e49-aba5-56edeb74e12c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.325452 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-catalog-content\") pod \"52786d0a-965b-4e49-aba5-56edeb74e12c\" (UID: \"52786d0a-965b-4e49-aba5-56edeb74e12c\") " Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.326010 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.338636 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52786d0a-965b-4e49-aba5-56edeb74e12c-kube-api-access-q5krp" (OuterVolumeSpecName: "kube-api-access-q5krp") pod "52786d0a-965b-4e49-aba5-56edeb74e12c" (UID: "52786d0a-965b-4e49-aba5-56edeb74e12c"). InnerVolumeSpecName "kube-api-access-q5krp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.339008 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52786d0a-965b-4e49-aba5-56edeb74e12c" (UID: "52786d0a-965b-4e49-aba5-56edeb74e12c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.427645 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52786d0a-965b-4e49-aba5-56edeb74e12c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.427684 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5krp\" (UniqueName: \"kubernetes.io/projected/52786d0a-965b-4e49-aba5-56edeb74e12c-kube-api-access-q5krp\") on node \"crc\" DevicePath \"\"" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.543485 4888 generic.go:334] "Generic (PLEG): container finished" podID="52786d0a-965b-4e49-aba5-56edeb74e12c" containerID="bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598" exitCode=0 Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.543542 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lnht" event={"ID":"52786d0a-965b-4e49-aba5-56edeb74e12c","Type":"ContainerDied","Data":"bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598"} Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.543558 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7lnht" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.543580 4888 scope.go:117] "RemoveContainer" containerID="bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.543567 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7lnht" event={"ID":"52786d0a-965b-4e49-aba5-56edeb74e12c","Type":"ContainerDied","Data":"fd67c42df9581244fd756713658f3c7f7fe4f3638434385adf717dbbcb2d1377"} Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.570690 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lnht"] Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.572277 4888 scope.go:117] "RemoveContainer" containerID="46abc4a50194a81fde2112fa1811cda32dc0e1f6f4cbcde90af57f5b9ee07fbd" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.579449 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7lnht"] Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.589321 4888 scope.go:117] "RemoveContainer" containerID="972f3df6fd7eedc34d3e1e740f83576c245065a1c03d5b316cb69f3e07d2959d" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.632312 4888 scope.go:117] "RemoveContainer" containerID="bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598" Oct 06 16:11:46 crc kubenswrapper[4888]: E1006 16:11:46.632872 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598\": container with ID starting with bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598 not found: ID does not exist" containerID="bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.632914 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598"} err="failed to get container status 
\"bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598\": rpc error: code = NotFound desc = could not find container \"bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598\": container with ID starting with bda1bc86326714f0ffc5fb1512821e5c533c833e8c765b5a3b3139362b00a598 not found: ID does not exist" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.632942 4888 scope.go:117] "RemoveContainer" containerID="46abc4a50194a81fde2112fa1811cda32dc0e1f6f4cbcde90af57f5b9ee07fbd" Oct 06 16:11:46 crc kubenswrapper[4888]: E1006 16:11:46.633269 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46abc4a50194a81fde2112fa1811cda32dc0e1f6f4cbcde90af57f5b9ee07fbd\": container with ID starting with 46abc4a50194a81fde2112fa1811cda32dc0e1f6f4cbcde90af57f5b9ee07fbd not found: ID does not exist" containerID="46abc4a50194a81fde2112fa1811cda32dc0e1f6f4cbcde90af57f5b9ee07fbd" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.633302 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46abc4a50194a81fde2112fa1811cda32dc0e1f6f4cbcde90af57f5b9ee07fbd"} err="failed to get container status \"46abc4a50194a81fde2112fa1811cda32dc0e1f6f4cbcde90af57f5b9ee07fbd\": rpc error: code = NotFound desc = could not find container \"46abc4a50194a81fde2112fa1811cda32dc0e1f6f4cbcde90af57f5b9ee07fbd\": container with ID starting with 46abc4a50194a81fde2112fa1811cda32dc0e1f6f4cbcde90af57f5b9ee07fbd not found: ID does not exist" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.633315 4888 scope.go:117] "RemoveContainer" containerID="972f3df6fd7eedc34d3e1e740f83576c245065a1c03d5b316cb69f3e07d2959d" Oct 06 16:11:46 crc kubenswrapper[4888]: E1006 16:11:46.634730 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"972f3df6fd7eedc34d3e1e740f83576c245065a1c03d5b316cb69f3e07d2959d\": container with ID starting with 972f3df6fd7eedc34d3e1e740f83576c245065a1c03d5b316cb69f3e07d2959d not found: ID does not exist" containerID="972f3df6fd7eedc34d3e1e740f83576c245065a1c03d5b316cb69f3e07d2959d" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.634873 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"972f3df6fd7eedc34d3e1e740f83576c245065a1c03d5b316cb69f3e07d2959d"} err="failed to get container status \"972f3df6fd7eedc34d3e1e740f83576c245065a1c03d5b316cb69f3e07d2959d\": rpc error: code = NotFound desc = could not find container \"972f3df6fd7eedc34d3e1e740f83576c245065a1c03d5b316cb69f3e07d2959d\": container with ID starting with 972f3df6fd7eedc34d3e1e740f83576c245065a1c03d5b316cb69f3e07d2959d not found: ID does not exist" Oct 06 16:11:46 crc kubenswrapper[4888]: I1006 16:11:46.931366 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52786d0a-965b-4e49-aba5-56edeb74e12c" path="/var/lib/kubelet/pods/52786d0a-965b-4e49-aba5-56edeb74e12c/volumes" Oct 06 16:12:02 crc kubenswrapper[4888]: I1006 16:12:02.563507 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 16:12:02 crc kubenswrapper[4888]: I1006 16:12:02.564077 4888 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 16:12:02 crc kubenswrapper[4888]: I1006 16:12:02.564127 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 16:12:02 crc kubenswrapper[4888]: I1006 16:12:02.564860 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 16:12:02 crc kubenswrapper[4888]: I1006 16:12:02.564918 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" gracePeriod=600 Oct 06 16:12:02 crc kubenswrapper[4888]: E1006 16:12:02.700877 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:12:03 crc kubenswrapper[4888]: I1006 16:12:03.695649 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" exitCode=0 Oct 06 16:12:03 crc kubenswrapper[4888]: I1006 16:12:03.695749 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e"} Oct 06 16:12:03 crc kubenswrapper[4888]: I1006 16:12:03.696119 4888 scope.go:117] "RemoveContainer" containerID="737ca743d7735cfdf3f73e3ae9e6c4d45a1e0c254a9a8379eaf3e0d71d605811" Oct 06 16:12:03 crc kubenswrapper[4888]: I1006 16:12:03.696907 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:12:03 crc kubenswrapper[4888]: E1006 16:12:03.697389 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:12:15 crc kubenswrapper[4888]: I1006 16:12:15.936180 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:12:15 crc kubenswrapper[4888]: E1006 16:12:15.938109 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:12:30 crc kubenswrapper[4888]: I1006 16:12:30.936490 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:12:30 crc kubenswrapper[4888]: E1006 16:12:30.939055 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:12:46 crc kubenswrapper[4888]: I1006 16:12:46.921272 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:12:46 crc kubenswrapper[4888]: E1006 16:12:46.922158 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:12:58 crc kubenswrapper[4888]: I1006 16:12:58.922337 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:12:58 crc kubenswrapper[4888]: E1006 16:12:58.923503 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:13:11 crc kubenswrapper[4888]: I1006 16:13:11.921498 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:13:11 crc kubenswrapper[4888]: E1006 16:13:11.922337 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:13:23 crc kubenswrapper[4888]: I1006 16:13:23.920940 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:13:23 crc kubenswrapper[4888]: E1006 16:13:23.921686 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:13:36 crc kubenswrapper[4888]: I1006 16:13:36.921868 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:13:36 crc kubenswrapper[4888]: E1006 16:13:36.922619 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:13:48 crc kubenswrapper[4888]: I1006 16:13:48.925603 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:13:48 crc kubenswrapper[4888]: E1006 16:13:48.926572 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:14:01 crc kubenswrapper[4888]: I1006 16:14:01.921512 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:14:01 crc kubenswrapper[4888]: E1006 16:14:01.922681 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.509234 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zhts8"] Oct 06 16:14:03 crc kubenswrapper[4888]: E1006 16:14:03.511634 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52786d0a-965b-4e49-aba5-56edeb74e12c" containerName="extract-utilities" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.511817 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="52786d0a-965b-4e49-aba5-56edeb74e12c" containerName="extract-utilities" Oct 06 16:14:03 crc kubenswrapper[4888]: E1006 16:14:03.511936 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52786d0a-965b-4e49-aba5-56edeb74e12c" containerName="extract-content" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.512018 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="52786d0a-965b-4e49-aba5-56edeb74e12c" containerName="extract-content" Oct 06 16:14:03 crc kubenswrapper[4888]: E1006 16:14:03.513041 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52786d0a-965b-4e49-aba5-56edeb74e12c" containerName="registry-server" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.513162 4888 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="52786d0a-965b-4e49-aba5-56edeb74e12c" containerName="registry-server" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.513573 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="52786d0a-965b-4e49-aba5-56edeb74e12c" containerName="registry-server" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.515549 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.542687 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhts8"] Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.659424 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-catalog-content\") pod \"redhat-operators-zhts8\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.659480 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwqqb\" (UniqueName: \"kubernetes.io/projected/1e69323c-7a04-4a16-bdba-09c6ea2b951f-kube-api-access-dwqqb\") pod \"redhat-operators-zhts8\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.659613 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-utilities\") pod \"redhat-operators-zhts8\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.761004 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-utilities\") pod \"redhat-operators-zhts8\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.761461 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-catalog-content\") pod \"redhat-operators-zhts8\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.761583 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwqqb\" (UniqueName: \"kubernetes.io/projected/1e69323c-7a04-4a16-bdba-09c6ea2b951f-kube-api-access-dwqqb\") pod \"redhat-operators-zhts8\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.761534 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-utilities\") pod \"redhat-operators-zhts8\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.761863 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-catalog-content\") pod \"redhat-operators-zhts8\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.784575 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwqqb\" (UniqueName: \"kubernetes.io/projected/1e69323c-7a04-4a16-bdba-09c6ea2b951f-kube-api-access-dwqqb\") pod \"redhat-operators-zhts8\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:03 crc kubenswrapper[4888]: I1006 16:14:03.842706 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:04 crc kubenswrapper[4888]: I1006 16:14:04.344186 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhts8"] Oct 06 16:14:04 crc kubenswrapper[4888]: I1006 16:14:04.650034 4888 generic.go:334] "Generic (PLEG): container finished" podID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerID="0878de58459db8aab48a4e34ca1c01cf703cbbb4aa58162d5f3117abf77480a7" exitCode=0 Oct 06 16:14:04 crc kubenswrapper[4888]: I1006 16:14:04.650182 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhts8" event={"ID":"1e69323c-7a04-4a16-bdba-09c6ea2b951f","Type":"ContainerDied","Data":"0878de58459db8aab48a4e34ca1c01cf703cbbb4aa58162d5f3117abf77480a7"} Oct 06 16:14:04 crc kubenswrapper[4888]: I1006 16:14:04.650352 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhts8" event={"ID":"1e69323c-7a04-4a16-bdba-09c6ea2b951f","Type":"ContainerStarted","Data":"b6ba667f9d78043840d54a76a45cbd593b71cfa8086b140458011b9a091d1eea"} Oct 06 16:14:06 crc kubenswrapper[4888]: I1006 16:14:06.675596 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhts8" event={"ID":"1e69323c-7a04-4a16-bdba-09c6ea2b951f","Type":"ContainerStarted","Data":"bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2"} Oct 06 16:14:09 crc kubenswrapper[4888]: I1006 16:14:09.702655 4888 generic.go:334] "Generic (PLEG): container finished" podID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerID="bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2" exitCode=0 Oct 06 16:14:09 crc kubenswrapper[4888]: I1006 16:14:09.702739 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhts8" event={"ID":"1e69323c-7a04-4a16-bdba-09c6ea2b951f","Type":"ContainerDied","Data":"bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2"} Oct 06 16:14:10 crc kubenswrapper[4888]: I1006 16:14:10.713822 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhts8" event={"ID":"1e69323c-7a04-4a16-bdba-09c6ea2b951f","Type":"ContainerStarted","Data":"6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0"} Oct 06 16:14:10 crc kubenswrapper[4888]: I1006 16:14:10.738810 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zhts8" podStartSLOduration=2.197180618 podStartE2EDuration="7.738765478s" podCreationTimestamp="2025-10-06 16:14:03 +0000 UTC" firstStartedPulling="2025-10-06 16:14:04.651492416 +0000 UTC m=+4384.463843134" lastFinishedPulling="2025-10-06 16:14:10.193077276 
+0000 UTC m=+4390.005427994" observedRunningTime="2025-10-06 16:14:10.730045191 +0000 UTC m=+4390.542395919" watchObservedRunningTime="2025-10-06 16:14:10.738765478 +0000 UTC m=+4390.551116196" Oct 06 16:14:13 crc kubenswrapper[4888]: I1006 16:14:13.843512 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:13 crc kubenswrapper[4888]: I1006 16:14:13.844058 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:14 crc kubenswrapper[4888]: I1006 16:14:14.890441 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zhts8" podUID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerName="registry-server" probeResult="failure" output=< Oct 06 16:14:14 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s Oct 06 16:14:14 crc kubenswrapper[4888]: > Oct 06 16:14:14 crc kubenswrapper[4888]: I1006 16:14:14.925385 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:14:14 crc kubenswrapper[4888]: E1006 16:14:14.925898 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:14:23 crc kubenswrapper[4888]: I1006 16:14:23.899267 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:23 crc kubenswrapper[4888]: I1006 16:14:23.952915 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:24 crc kubenswrapper[4888]: I1006 16:14:24.149942 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhts8"] Oct 06 16:14:25 crc kubenswrapper[4888]: I1006 16:14:25.838708 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zhts8" podUID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerName="registry-server" containerID="cri-o://6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0" gracePeriod=2 Oct 06 16:14:25 crc kubenswrapper[4888]: I1006 16:14:25.921691 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:14:25 crc kubenswrapper[4888]: E1006 16:14:25.921980 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.318257 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.389501 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-catalog-content\") pod \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.389623 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-utilities\") pod \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.389793 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwqqb\" (UniqueName: \"kubernetes.io/projected/1e69323c-7a04-4a16-bdba-09c6ea2b951f-kube-api-access-dwqqb\") pod \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\" (UID: \"1e69323c-7a04-4a16-bdba-09c6ea2b951f\") " Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.392857 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-utilities" (OuterVolumeSpecName: "utilities") pod "1e69323c-7a04-4a16-bdba-09c6ea2b951f" (UID: "1e69323c-7a04-4a16-bdba-09c6ea2b951f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.398782 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e69323c-7a04-4a16-bdba-09c6ea2b951f-kube-api-access-dwqqb" (OuterVolumeSpecName: "kube-api-access-dwqqb") pod "1e69323c-7a04-4a16-bdba-09c6ea2b951f" (UID: "1e69323c-7a04-4a16-bdba-09c6ea2b951f"). InnerVolumeSpecName "kube-api-access-dwqqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.493390 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.493437 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwqqb\" (UniqueName: \"kubernetes.io/projected/1e69323c-7a04-4a16-bdba-09c6ea2b951f-kube-api-access-dwqqb\") on node \"crc\" DevicePath \"\"" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.516645 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e69323c-7a04-4a16-bdba-09c6ea2b951f" (UID: "1e69323c-7a04-4a16-bdba-09c6ea2b951f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.595848 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e69323c-7a04-4a16-bdba-09c6ea2b951f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.850944 4888 generic.go:334] "Generic (PLEG): container finished" podID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerID="6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0" exitCode=0 Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.851033 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhts8" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.851056 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhts8" event={"ID":"1e69323c-7a04-4a16-bdba-09c6ea2b951f","Type":"ContainerDied","Data":"6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0"} Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.851406 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhts8" event={"ID":"1e69323c-7a04-4a16-bdba-09c6ea2b951f","Type":"ContainerDied","Data":"b6ba667f9d78043840d54a76a45cbd593b71cfa8086b140458011b9a091d1eea"} Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.851451 4888 scope.go:117] "RemoveContainer" containerID="6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.886206 4888 scope.go:117] "RemoveContainer" containerID="bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.887487 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhts8"] Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.903574 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zhts8"] Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.909940 4888 scope.go:117] "RemoveContainer" containerID="0878de58459db8aab48a4e34ca1c01cf703cbbb4aa58162d5f3117abf77480a7" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.937669 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" path="/var/lib/kubelet/pods/1e69323c-7a04-4a16-bdba-09c6ea2b951f/volumes" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.955474 4888 scope.go:117] "RemoveContainer" containerID="6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0" Oct 06 16:14:26 crc kubenswrapper[4888]: E1006 16:14:26.956237 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0\": container with ID starting with 6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0 not found: ID does not exist" containerID="6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.956372 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0"} err="failed to get container status \"6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0\": rpc error: code = NotFound desc 
= could not find container \"6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0\": container with ID starting with 6731d948694fd6c1c6e8b56ef3d8eba1979bd957cd25af2c0e1246deb3191aa0 not found: ID does not exist" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.956465 4888 scope.go:117] "RemoveContainer" containerID="bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2" Oct 06 16:14:26 crc kubenswrapper[4888]: E1006 16:14:26.957177 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2\": container with ID starting with bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2 not found: ID does not exist" containerID="bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.957246 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2"} err="failed to get container status \"bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2\": rpc error: code = NotFound desc = could not find container \"bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2\": container with ID starting with bae47fa18e7d7b056c64c92807d7b7024a8ba4b69d3c0442a81a66d03b957bd2 not found: ID does not exist" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.957300 4888 scope.go:117] "RemoveContainer" containerID="0878de58459db8aab48a4e34ca1c01cf703cbbb4aa58162d5f3117abf77480a7" Oct 06 16:14:26 crc kubenswrapper[4888]: E1006 16:14:26.957591 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0878de58459db8aab48a4e34ca1c01cf703cbbb4aa58162d5f3117abf77480a7\": container with ID starting with 0878de58459db8aab48a4e34ca1c01cf703cbbb4aa58162d5f3117abf77480a7 not found: ID does not exist" containerID="0878de58459db8aab48a4e34ca1c01cf703cbbb4aa58162d5f3117abf77480a7" Oct 06 16:14:26 crc kubenswrapper[4888]: I1006 16:14:26.957618 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0878de58459db8aab48a4e34ca1c01cf703cbbb4aa58162d5f3117abf77480a7"} err="failed to get container status \"0878de58459db8aab48a4e34ca1c01cf703cbbb4aa58162d5f3117abf77480a7\": rpc error: code = NotFound desc = could not find container \"0878de58459db8aab48a4e34ca1c01cf703cbbb4aa58162d5f3117abf77480a7\": container with ID starting with 0878de58459db8aab48a4e34ca1c01cf703cbbb4aa58162d5f3117abf77480a7 not found: ID does not exist" Oct 06 16:14:40 crc kubenswrapper[4888]: I1006 16:14:40.929569 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:14:40 crc kubenswrapper[4888]: E1006 16:14:40.930379 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:14:54 crc kubenswrapper[4888]: I1006 16:14:54.921096 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 
06 16:14:54 crc kubenswrapper[4888]: E1006 16:14:54.922076 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.153635 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c"] Oct 06 16:15:00 crc kubenswrapper[4888]: E1006 16:15:00.154685 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerName="extract-utilities" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.154703 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerName="extract-utilities" Oct 06 16:15:00 crc kubenswrapper[4888]: E1006 16:15:00.154741 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerName="registry-server" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.154750 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerName="registry-server" Oct 06 16:15:00 crc kubenswrapper[4888]: E1006 16:15:00.154771 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerName="extract-content" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.154779 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerName="extract-content" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.155039 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e69323c-7a04-4a16-bdba-09c6ea2b951f" containerName="registry-server" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.155843 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.158470 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.158651 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.164626 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c"] Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.267600 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzxfw\" (UniqueName: \"kubernetes.io/projected/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-kube-api-access-fzxfw\") pod \"collect-profiles-29329455-8bl7c\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.267680 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-secret-volume\") pod \"collect-profiles-29329455-8bl7c\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.267731 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-config-volume\") pod \"collect-profiles-29329455-8bl7c\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.369666 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-config-volume\") pod \"collect-profiles-29329455-8bl7c\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.369820 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzxfw\" (UniqueName: \"kubernetes.io/projected/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-kube-api-access-fzxfw\") pod \"collect-profiles-29329455-8bl7c\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.369880 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-secret-volume\") pod \"collect-profiles-29329455-8bl7c\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.370741 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-config-volume\") pod 
\"collect-profiles-29329455-8bl7c\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.376161 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-secret-volume\") pod \"collect-profiles-29329455-8bl7c\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.392065 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzxfw\" (UniqueName: \"kubernetes.io/projected/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-kube-api-access-fzxfw\") pod \"collect-profiles-29329455-8bl7c\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.477219 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:00 crc kubenswrapper[4888]: I1006 16:15:00.960998 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c"] Oct 06 16:15:02 crc kubenswrapper[4888]: I1006 16:15:02.152547 4888 generic.go:334] "Generic (PLEG): container finished" podID="7288ce7b-0611-4fca-b5bb-0fa18b2b5d83" containerID="0459611dbd9409f92078736d37478a7eb7ebb76a43bb186a15f405b2a41d28b0" exitCode=0 Oct 06 16:15:02 crc kubenswrapper[4888]: I1006 16:15:02.152640 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" event={"ID":"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83","Type":"ContainerDied","Data":"0459611dbd9409f92078736d37478a7eb7ebb76a43bb186a15f405b2a41d28b0"} Oct 06 16:15:02 crc kubenswrapper[4888]: I1006 16:15:02.152958 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" event={"ID":"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83","Type":"ContainerStarted","Data":"9de3b01113a87d46013b7ecec4ea670d5178f90e7c055b16cd7d7c4c637b590c"} Oct 06 16:15:03 crc kubenswrapper[4888]: I1006 16:15:03.551650 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:03 crc kubenswrapper[4888]: I1006 16:15:03.734620 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-config-volume\") pod \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " Oct 06 16:15:03 crc kubenswrapper[4888]: I1006 16:15:03.734754 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzxfw\" (UniqueName: \"kubernetes.io/projected/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-kube-api-access-fzxfw\") pod \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " Oct 06 16:15:03 crc kubenswrapper[4888]: I1006 16:15:03.735133 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-secret-volume\") pod \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\" (UID: \"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83\") " Oct 06 16:15:03 crc kubenswrapper[4888]: I1006 16:15:03.735386 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-config-volume" (OuterVolumeSpecName: "config-volume") pod "7288ce7b-0611-4fca-b5bb-0fa18b2b5d83" (UID: "7288ce7b-0611-4fca-b5bb-0fa18b2b5d83"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 16:15:03 crc kubenswrapper[4888]: I1006 16:15:03.735722 4888 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 16:15:03 crc kubenswrapper[4888]: I1006 16:15:03.740907 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-kube-api-access-fzxfw" (OuterVolumeSpecName: "kube-api-access-fzxfw") pod "7288ce7b-0611-4fca-b5bb-0fa18b2b5d83" (UID: "7288ce7b-0611-4fca-b5bb-0fa18b2b5d83"). InnerVolumeSpecName "kube-api-access-fzxfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:15:03 crc kubenswrapper[4888]: I1006 16:15:03.746293 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7288ce7b-0611-4fca-b5bb-0fa18b2b5d83" (UID: "7288ce7b-0611-4fca-b5bb-0fa18b2b5d83"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 16:15:03 crc kubenswrapper[4888]: I1006 16:15:03.837151 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzxfw\" (UniqueName: \"kubernetes.io/projected/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-kube-api-access-fzxfw\") on node \"crc\" DevicePath \"\"" Oct 06 16:15:03 crc kubenswrapper[4888]: I1006 16:15:03.837199 4888 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7288ce7b-0611-4fca-b5bb-0fa18b2b5d83-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 16:15:04 crc kubenswrapper[4888]: I1006 16:15:04.180549 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" event={"ID":"7288ce7b-0611-4fca-b5bb-0fa18b2b5d83","Type":"ContainerDied","Data":"9de3b01113a87d46013b7ecec4ea670d5178f90e7c055b16cd7d7c4c637b590c"} Oct 06 16:15:04 crc kubenswrapper[4888]: I1006 16:15:04.180818 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9de3b01113a87d46013b7ecec4ea670d5178f90e7c055b16cd7d7c4c637b590c" Oct 06 16:15:04 crc kubenswrapper[4888]: I1006 16:15:04.180625 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329455-8bl7c" Oct 06 16:15:04 crc kubenswrapper[4888]: I1006 16:15:04.625020 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh"] Oct 06 16:15:04 crc kubenswrapper[4888]: I1006 16:15:04.632822 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329410-95mjh"] Oct 06 16:15:04 crc kubenswrapper[4888]: I1006 16:15:04.932854 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6751bf31-8e5c-471f-bdbb-1ddd06bcf233" path="/var/lib/kubelet/pods/6751bf31-8e5c-471f-bdbb-1ddd06bcf233/volumes" Oct 06 16:15:08 crc kubenswrapper[4888]: I1006 16:15:08.924520 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:15:08 crc kubenswrapper[4888]: E1006 16:15:08.925266 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:15:13 crc kubenswrapper[4888]: I1006 16:15:13.890145 4888 scope.go:117] "RemoveContainer" containerID="8b6f4bd15e238a68652f75eb7278c12dd92f400fe95bc8ac235ed8141beab749" Oct 06 16:15:21 crc kubenswrapper[4888]: I1006 16:15:21.921972 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:15:21 crc kubenswrapper[4888]: E1006 16:15:21.923055 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:15:36 
crc kubenswrapper[4888]: I1006 16:15:36.925975 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:15:36 crc kubenswrapper[4888]: E1006 16:15:36.926995 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:15:50 crc kubenswrapper[4888]: I1006 16:15:50.927406 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:15:50 crc kubenswrapper[4888]: E1006 16:15:50.928302 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:16:01 crc kubenswrapper[4888]: I1006 16:16:01.922019 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:16:01 crc kubenswrapper[4888]: E1006 16:16:01.923265 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:16:15 crc kubenswrapper[4888]: I1006 16:16:15.923619 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:16:15 crc kubenswrapper[4888]: E1006 16:16:15.927095 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:16:26 crc kubenswrapper[4888]: I1006 16:16:26.921919 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:16:26 crc kubenswrapper[4888]: E1006 16:16:26.922948 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:16:39 crc kubenswrapper[4888]: I1006 16:16:39.921550 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:16:39 crc 
kubenswrapper[4888]: E1006 16:16:39.922609 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:16:53 crc kubenswrapper[4888]: I1006 16:16:53.921930 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:16:53 crc kubenswrapper[4888]: E1006 16:16:53.923086 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:17:08 crc kubenswrapper[4888]: I1006 16:17:08.921361 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:17:09 crc kubenswrapper[4888]: I1006 16:17:09.335726 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"7d3611487d0528d0af0215a407358f0276ad7ed9d5fdb668d5150c072db9269d"} Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.061830 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xhp89"] Oct 06 16:18:34 crc kubenswrapper[4888]: E1006 16:18:34.063430 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7288ce7b-0611-4fca-b5bb-0fa18b2b5d83" containerName="collect-profiles" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.063455 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="7288ce7b-0611-4fca-b5bb-0fa18b2b5d83" containerName="collect-profiles" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.063747 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="7288ce7b-0611-4fca-b5bb-0fa18b2b5d83" containerName="collect-profiles" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.066757 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.102969 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xhp89"] Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.175191 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk8sz\" (UniqueName: \"kubernetes.io/projected/4e1d636e-ba60-428d-aaf2-519a3be40c46-kube-api-access-jk8sz\") pod \"community-operators-xhp89\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.175435 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-utilities\") pod \"community-operators-xhp89\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.175491 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-catalog-content\") pod \"community-operators-xhp89\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.278257 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-utilities\") pod \"community-operators-xhp89\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.278323 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-catalog-content\") pod \"community-operators-xhp89\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.278432 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk8sz\" (UniqueName: \"kubernetes.io/projected/4e1d636e-ba60-428d-aaf2-519a3be40c46-kube-api-access-jk8sz\") pod \"community-operators-xhp89\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.279263 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-utilities\") pod \"community-operators-xhp89\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.279555 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-catalog-content\") pod \"community-operators-xhp89\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.310488 4888 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jk8sz\" (UniqueName: \"kubernetes.io/projected/4e1d636e-ba60-428d-aaf2-519a3be40c46-kube-api-access-jk8sz\") pod \"community-operators-xhp89\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.405978 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:34 crc kubenswrapper[4888]: I1006 16:18:34.974400 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xhp89"] Oct 06 16:18:35 crc kubenswrapper[4888]: I1006 16:18:35.234137 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp89" event={"ID":"4e1d636e-ba60-428d-aaf2-519a3be40c46","Type":"ContainerStarted","Data":"36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3"} Oct 06 16:18:35 crc kubenswrapper[4888]: I1006 16:18:35.234382 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp89" event={"ID":"4e1d636e-ba60-428d-aaf2-519a3be40c46","Type":"ContainerStarted","Data":"09779be0bfefc2787a27a0d9f1651824507010ce465e6ca1f3adf127be3c10ef"} Oct 06 16:18:36 crc kubenswrapper[4888]: I1006 16:18:36.250633 4888 generic.go:334] "Generic (PLEG): container finished" podID="4e1d636e-ba60-428d-aaf2-519a3be40c46" containerID="36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3" exitCode=0 Oct 06 16:18:36 crc kubenswrapper[4888]: I1006 16:18:36.250773 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp89" event={"ID":"4e1d636e-ba60-428d-aaf2-519a3be40c46","Type":"ContainerDied","Data":"36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3"} Oct 06 16:18:36 crc kubenswrapper[4888]: I1006 16:18:36.254772 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 16:18:38 crc kubenswrapper[4888]: I1006 16:18:38.268823 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp89" event={"ID":"4e1d636e-ba60-428d-aaf2-519a3be40c46","Type":"ContainerStarted","Data":"05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40"} Oct 06 16:18:39 crc kubenswrapper[4888]: I1006 16:18:39.286316 4888 generic.go:334] "Generic (PLEG): container finished" podID="4e1d636e-ba60-428d-aaf2-519a3be40c46" containerID="05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40" exitCode=0 Oct 06 16:18:39 crc kubenswrapper[4888]: I1006 16:18:39.286371 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp89" event={"ID":"4e1d636e-ba60-428d-aaf2-519a3be40c46","Type":"ContainerDied","Data":"05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40"} Oct 06 16:18:40 crc kubenswrapper[4888]: I1006 16:18:40.298849 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp89" event={"ID":"4e1d636e-ba60-428d-aaf2-519a3be40c46","Type":"ContainerStarted","Data":"8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8"} Oct 06 16:18:40 crc kubenswrapper[4888]: I1006 16:18:40.317227 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xhp89" podStartSLOduration=2.71346092 podStartE2EDuration="6.317212141s" 
podCreationTimestamp="2025-10-06 16:18:34 +0000 UTC" firstStartedPulling="2025-10-06 16:18:36.254130871 +0000 UTC m=+4656.066481629" lastFinishedPulling="2025-10-06 16:18:39.857882082 +0000 UTC m=+4659.670232850" observedRunningTime="2025-10-06 16:18:40.315112675 +0000 UTC m=+4660.127463393" watchObservedRunningTime="2025-10-06 16:18:40.317212141 +0000 UTC m=+4660.129562869" Oct 06 16:18:44 crc kubenswrapper[4888]: I1006 16:18:44.406094 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:44 crc kubenswrapper[4888]: I1006 16:18:44.406886 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:44 crc kubenswrapper[4888]: I1006 16:18:44.507582 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:45 crc kubenswrapper[4888]: I1006 16:18:45.433435 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:45 crc kubenswrapper[4888]: I1006 16:18:45.513799 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xhp89"] Oct 06 16:18:47 crc kubenswrapper[4888]: I1006 16:18:47.365161 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xhp89" podUID="4e1d636e-ba60-428d-aaf2-519a3be40c46" containerName="registry-server" containerID="cri-o://8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8" gracePeriod=2 Oct 06 16:18:47 crc kubenswrapper[4888]: I1006 16:18:47.861443 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:47 crc kubenswrapper[4888]: I1006 16:18:47.880854 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-catalog-content\") pod \"4e1d636e-ba60-428d-aaf2-519a3be40c46\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " Oct 06 16:18:47 crc kubenswrapper[4888]: I1006 16:18:47.881021 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk8sz\" (UniqueName: \"kubernetes.io/projected/4e1d636e-ba60-428d-aaf2-519a3be40c46-kube-api-access-jk8sz\") pod \"4e1d636e-ba60-428d-aaf2-519a3be40c46\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " Oct 06 16:18:47 crc kubenswrapper[4888]: I1006 16:18:47.881102 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-utilities\") pod \"4e1d636e-ba60-428d-aaf2-519a3be40c46\" (UID: \"4e1d636e-ba60-428d-aaf2-519a3be40c46\") " Oct 06 16:18:47 crc kubenswrapper[4888]: I1006 16:18:47.884043 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-utilities" (OuterVolumeSpecName: "utilities") pod "4e1d636e-ba60-428d-aaf2-519a3be40c46" (UID: "4e1d636e-ba60-428d-aaf2-519a3be40c46"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:18:47 crc kubenswrapper[4888]: I1006 16:18:47.889517 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1d636e-ba60-428d-aaf2-519a3be40c46-kube-api-access-jk8sz" (OuterVolumeSpecName: "kube-api-access-jk8sz") pod "4e1d636e-ba60-428d-aaf2-519a3be40c46" (UID: "4e1d636e-ba60-428d-aaf2-519a3be40c46"). InnerVolumeSpecName "kube-api-access-jk8sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:18:47 crc kubenswrapper[4888]: I1006 16:18:47.955416 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e1d636e-ba60-428d-aaf2-519a3be40c46" (UID: "4e1d636e-ba60-428d-aaf2-519a3be40c46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:18:47 crc kubenswrapper[4888]: I1006 16:18:47.985699 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk8sz\" (UniqueName: \"kubernetes.io/projected/4e1d636e-ba60-428d-aaf2-519a3be40c46-kube-api-access-jk8sz\") on node \"crc\" DevicePath \"\"" Oct 06 16:18:47 crc kubenswrapper[4888]: I1006 16:18:47.985729 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 16:18:47 crc kubenswrapper[4888]: I1006 16:18:47.985741 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1d636e-ba60-428d-aaf2-519a3be40c46-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.384659 4888 generic.go:334] "Generic (PLEG): container finished" podID="4e1d636e-ba60-428d-aaf2-519a3be40c46" containerID="8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8" exitCode=0 Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.384767 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp89" event={"ID":"4e1d636e-ba60-428d-aaf2-519a3be40c46","Type":"ContainerDied","Data":"8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8"} Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.385033 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp89" event={"ID":"4e1d636e-ba60-428d-aaf2-519a3be40c46","Type":"ContainerDied","Data":"09779be0bfefc2787a27a0d9f1651824507010ce465e6ca1f3adf127be3c10ef"} Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.385054 4888 scope.go:117] "RemoveContainer" containerID="8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8" Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.385197 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xhp89" Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.418215 4888 scope.go:117] "RemoveContainer" containerID="05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40" Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.426027 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xhp89"] Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.452002 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xhp89"] Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.452697 4888 scope.go:117] "RemoveContainer" containerID="36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3" Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.492460 4888 scope.go:117] "RemoveContainer" containerID="8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8" Oct 06 16:18:48 crc kubenswrapper[4888]: E1006 16:18:48.492829 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8\": container with ID starting with 8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8 not found: ID does not exist" containerID="8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8" Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.492860 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8"} err="failed to get container status \"8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8\": rpc error: code = NotFound desc = could not find container \"8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8\": container with ID starting with 8e4fb693173cfe98125dc3838e3459f68b1709ca3afcf8fd32396d818f7eb2e8 not found: ID does not exist" Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.492895 4888 scope.go:117] "RemoveContainer" containerID="05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40" Oct 06 16:18:48 crc kubenswrapper[4888]: E1006 16:18:48.493219 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40\": container with ID starting with 05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40 not found: ID does not exist" containerID="05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40" Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.493253 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40"} err="failed to get container status \"05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40\": rpc error: code = NotFound desc = could not find container \"05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40\": container with ID starting with 05eb8b380b38154235b6c2022c4df7a5862a2c8ac3fc22e9b1dda517897f8a40 not found: ID does not exist" Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.493270 4888 scope.go:117] "RemoveContainer" containerID="36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3" Oct 06 16:18:48 crc kubenswrapper[4888]: E1006 16:18:48.493653 4888 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3\": container with ID starting with 36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3 not found: ID does not exist" containerID="36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3" Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.493685 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3"} err="failed to get container status \"36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3\": rpc error: code = NotFound desc = could not find container \"36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3\": container with ID starting with 36d9c16429709536565c03b85c1d0208f0a30c91c408080438f8aa509f4514b3 not found: ID does not exist" Oct 06 16:18:48 crc kubenswrapper[4888]: I1006 16:18:48.933239 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1d636e-ba60-428d-aaf2-519a3be40c46" path="/var/lib/kubelet/pods/4e1d636e-ba60-428d-aaf2-519a3be40c46/volumes" Oct 06 16:19:32 crc kubenswrapper[4888]: I1006 16:19:32.563440 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 16:19:32 crc kubenswrapper[4888]: I1006 16:19:32.563996 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 16:20:02 crc kubenswrapper[4888]: I1006 16:20:02.563345 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 16:20:02 crc kubenswrapper[4888]: I1006 16:20:02.563911 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 16:20:05 crc kubenswrapper[4888]: I1006 16:20:05.954133 4888 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7dc98fc94f-nlvnj" podUID="39bde926-1f59-45eb-8f71-841d380f9c5d" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Oct 06 16:20:32 crc kubenswrapper[4888]: I1006 16:20:32.563777 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 16:20:32 crc kubenswrapper[4888]: I1006 16:20:32.564487 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" 
podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 16:20:32 crc kubenswrapper[4888]: I1006 16:20:32.564541 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" Oct 06 16:20:32 crc kubenswrapper[4888]: I1006 16:20:32.565670 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d3611487d0528d0af0215a407358f0276ad7ed9d5fdb668d5150c072db9269d"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 16:20:32 crc kubenswrapper[4888]: I1006 16:20:32.565741 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://7d3611487d0528d0af0215a407358f0276ad7ed9d5fdb668d5150c072db9269d" gracePeriod=600 Oct 06 16:20:33 crc kubenswrapper[4888]: I1006 16:20:33.449383 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="7d3611487d0528d0af0215a407358f0276ad7ed9d5fdb668d5150c072db9269d" exitCode=0 Oct 06 16:20:33 crc kubenswrapper[4888]: I1006 16:20:33.449783 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"7d3611487d0528d0af0215a407358f0276ad7ed9d5fdb668d5150c072db9269d"} Oct 06 16:20:33 crc kubenswrapper[4888]: I1006 16:20:33.449956 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c"} Oct 06 16:20:33 crc kubenswrapper[4888]: I1006 16:20:33.449981 4888 scope.go:117] "RemoveContainer" containerID="e31257ddd14d95c30a7b64fee3951a177654794e3911074954c69602f818847e" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.452026 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zzfqw"] Oct 06 16:21:48 crc kubenswrapper[4888]: E1006 16:21:48.453066 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1d636e-ba60-428d-aaf2-519a3be40c46" containerName="extract-content" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.453083 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1d636e-ba60-428d-aaf2-519a3be40c46" containerName="extract-content" Oct 06 16:21:48 crc kubenswrapper[4888]: E1006 16:21:48.453117 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1d636e-ba60-428d-aaf2-519a3be40c46" containerName="extract-utilities" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.453127 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1d636e-ba60-428d-aaf2-519a3be40c46" containerName="extract-utilities" Oct 06 16:21:48 crc kubenswrapper[4888]: E1006 16:21:48.453155 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1d636e-ba60-428d-aaf2-519a3be40c46" containerName="registry-server" Oct 06 16:21:48 crc 
kubenswrapper[4888]: I1006 16:21:48.453164 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1d636e-ba60-428d-aaf2-519a3be40c46" containerName="registry-server" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.453407 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1d636e-ba60-428d-aaf2-519a3be40c46" containerName="registry-server" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.455134 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzfqw" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.478329 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzfqw"] Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.575723 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzs4p\" (UniqueName: \"kubernetes.io/projected/1c530058-a294-4457-9ead-ee20a5d72a88-kube-api-access-gzs4p\") pod \"redhat-marketplace-zzfqw\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") " pod="openshift-marketplace/redhat-marketplace-zzfqw" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.575773 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-catalog-content\") pod \"redhat-marketplace-zzfqw\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") " pod="openshift-marketplace/redhat-marketplace-zzfqw" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.575805 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-utilities\") pod \"redhat-marketplace-zzfqw\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") " pod="openshift-marketplace/redhat-marketplace-zzfqw" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.677712 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzs4p\" (UniqueName: \"kubernetes.io/projected/1c530058-a294-4457-9ead-ee20a5d72a88-kube-api-access-gzs4p\") pod \"redhat-marketplace-zzfqw\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") " pod="openshift-marketplace/redhat-marketplace-zzfqw" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.678031 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-catalog-content\") pod \"redhat-marketplace-zzfqw\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") " pod="openshift-marketplace/redhat-marketplace-zzfqw" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.678145 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-utilities\") pod \"redhat-marketplace-zzfqw\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") " pod="openshift-marketplace/redhat-marketplace-zzfqw" Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.678580 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-catalog-content\") pod \"redhat-marketplace-zzfqw\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") " pod="openshift-marketplace/redhat-marketplace-zzfqw" 
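The 16:21:48 entries above are the kubelet's standard bring-up sequence for a fresh marketplace catalog pod: SyncLoop ADD, RemoveStaleState cleanup of the CPU- and memory-manager bookkeeping left behind by the previous catalog pod's UID, then the volume reconciler verifying and mounting the pod's volumes before any sandbox exists. As a rough, hypothetical sketch (not taken from this cluster's manifests), the pod shape these entries imply looks like the following; the names are copied from the log, while images and commands are omitted because the log never records them.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical reconstruction of the catalog pod implied by the log: two
// emptyDir volumes ("utilities", "catalog-content") and the containers named
// in the RemoveStaleState entries. The projected kube-api-access-gzs4p volume
// is injected automatically for the service-account token, so it is not
// declared here.
func catalogPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "redhat-marketplace-zzfqw", // name taken from the log
			Namespace: "openshift-marketplace",
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
			},
			// Init containers run to completion first, which is why the log
			// shows their ContainerDied events (exitCode=0) before the
			// long-running registry-server starts.
			InitContainers: []corev1.Container{
				{Name: "extract-utilities"},
				{Name: "extract-content"},
			},
			Containers: []corev1.Container{
				{Name: "registry-server"},
			},
		},
	}
}

func main() {
	fmt.Println(catalogPod().Name)
}
```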
Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.678610 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-utilities\") pod \"redhat-marketplace-zzfqw\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") " pod="openshift-marketplace/redhat-marketplace-zzfqw"
Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.708814 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzs4p\" (UniqueName: \"kubernetes.io/projected/1c530058-a294-4457-9ead-ee20a5d72a88-kube-api-access-gzs4p\") pod \"redhat-marketplace-zzfqw\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") " pod="openshift-marketplace/redhat-marketplace-zzfqw"
Oct 06 16:21:48 crc kubenswrapper[4888]: I1006 16:21:48.790421 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzfqw"
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.325582 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzfqw"]
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.444165 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gdqnz"]
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.445900 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.459587 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gdqnz"]
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.599302 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-utilities\") pod \"certified-operators-gdqnz\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") " pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.599414 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ql4j\" (UniqueName: \"kubernetes.io/projected/652f4236-2811-4ee6-9738-fb29cc64967f-kube-api-access-6ql4j\") pod \"certified-operators-gdqnz\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") " pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.599727 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-catalog-content\") pod \"certified-operators-gdqnz\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") " pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.701082 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-catalog-content\") pod \"certified-operators-gdqnz\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") " pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.701250 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-utilities\") pod \"certified-operators-gdqnz\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") " pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.701329 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ql4j\" (UniqueName: \"kubernetes.io/projected/652f4236-2811-4ee6-9738-fb29cc64967f-kube-api-access-6ql4j\") pod \"certified-operators-gdqnz\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") " pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.701982 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-utilities\") pod \"certified-operators-gdqnz\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") " pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.702193 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-catalog-content\") pod \"certified-operators-gdqnz\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") " pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.723302 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ql4j\" (UniqueName: \"kubernetes.io/projected/652f4236-2811-4ee6-9738-fb29cc64967f-kube-api-access-6ql4j\") pod \"certified-operators-gdqnz\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") " pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:49 crc kubenswrapper[4888]: I1006 16:21:49.760593 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:50 crc kubenswrapper[4888]: I1006 16:21:50.189165 4888 generic.go:334] "Generic (PLEG): container finished" podID="1c530058-a294-4457-9ead-ee20a5d72a88" containerID="9078b78a67be8230a00865bc1d2e5702233373e38034c702c68427bebe7635f9" exitCode=0
Oct 06 16:21:50 crc kubenswrapper[4888]: I1006 16:21:50.189229 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzfqw" event={"ID":"1c530058-a294-4457-9ead-ee20a5d72a88","Type":"ContainerDied","Data":"9078b78a67be8230a00865bc1d2e5702233373e38034c702c68427bebe7635f9"}
Oct 06 16:21:50 crc kubenswrapper[4888]: I1006 16:21:50.189253 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzfqw" event={"ID":"1c530058-a294-4457-9ead-ee20a5d72a88","Type":"ContainerStarted","Data":"0167e9ba46a37586164b05317817b48e2d08fae50e450f36c6172daa49e3774e"}
Oct 06 16:21:50 crc kubenswrapper[4888]: I1006 16:21:50.276552 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gdqnz"]
Oct 06 16:21:50 crc kubenswrapper[4888]: W1006 16:21:50.639115 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod652f4236_2811_4ee6_9738_fb29cc64967f.slice/crio-37268fecd1f12d608ea176b364684db3c42088d83e1b7198a5799bace2867d68 WatchSource:0}: Error finding container 37268fecd1f12d608ea176b364684db3c42088d83e1b7198a5799bace2867d68: Status 404 returned error can't find the container with id 37268fecd1f12d608ea176b364684db3c42088d83e1b7198a5799bace2867d68
Oct 06 16:21:51 crc kubenswrapper[4888]: I1006 16:21:51.199394 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzfqw" event={"ID":"1c530058-a294-4457-9ead-ee20a5d72a88","Type":"ContainerStarted","Data":"23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8"}
Oct 06 16:21:51 crc kubenswrapper[4888]: I1006 16:21:51.201234 4888 generic.go:334] "Generic (PLEG): container finished" podID="652f4236-2811-4ee6-9738-fb29cc64967f" containerID="3874f57d1d661890f4674bd7d011c570ac7482171ddbd7bbb59e356e72443d48" exitCode=0
Oct 06 16:21:51 crc kubenswrapper[4888]: I1006 16:21:51.201260 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gdqnz" event={"ID":"652f4236-2811-4ee6-9738-fb29cc64967f","Type":"ContainerDied","Data":"3874f57d1d661890f4674bd7d011c570ac7482171ddbd7bbb59e356e72443d48"}
Oct 06 16:21:51 crc kubenswrapper[4888]: I1006 16:21:51.201276 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gdqnz" event={"ID":"652f4236-2811-4ee6-9738-fb29cc64967f","Type":"ContainerStarted","Data":"37268fecd1f12d608ea176b364684db3c42088d83e1b7198a5799bace2867d68"}
Oct 06 16:21:52 crc kubenswrapper[4888]: I1006 16:21:52.211991 4888 generic.go:334] "Generic (PLEG): container finished" podID="1c530058-a294-4457-9ead-ee20a5d72a88" containerID="23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8" exitCode=0
Oct 06 16:21:52 crc kubenswrapper[4888]: I1006 16:21:52.212187 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzfqw" event={"ID":"1c530058-a294-4457-9ead-ee20a5d72a88","Type":"ContainerDied","Data":"23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8"}
Oct 06 16:21:53 crc kubenswrapper[4888]: I1006 16:21:53.226157 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzfqw" event={"ID":"1c530058-a294-4457-9ead-ee20a5d72a88","Type":"ContainerStarted","Data":"cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494"}
Oct 06 16:21:53 crc kubenswrapper[4888]: I1006 16:21:53.228469 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gdqnz" event={"ID":"652f4236-2811-4ee6-9738-fb29cc64967f","Type":"ContainerStarted","Data":"acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7"}
Oct 06 16:21:53 crc kubenswrapper[4888]: I1006 16:21:53.248945 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zzfqw" podStartSLOduration=2.652340508 podStartE2EDuration="5.248926184s" podCreationTimestamp="2025-10-06 16:21:48 +0000 UTC" firstStartedPulling="2025-10-06 16:21:50.191132738 +0000 UTC m=+4850.003483456" lastFinishedPulling="2025-10-06 16:21:52.787718414 +0000 UTC m=+4852.600069132" observedRunningTime="2025-10-06 16:21:53.243231674 +0000 UTC m=+4853.055582392" watchObservedRunningTime="2025-10-06 16:21:53.248926184 +0000 UTC m=+4853.061276912"
Oct 06 16:21:54 crc kubenswrapper[4888]: I1006 16:21:54.237315 4888 generic.go:334] "Generic (PLEG): container finished" podID="652f4236-2811-4ee6-9738-fb29cc64967f" containerID="acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7" exitCode=0
Oct 06 16:21:54 crc kubenswrapper[4888]: I1006 16:21:54.237349 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gdqnz" event={"ID":"652f4236-2811-4ee6-9738-fb29cc64967f","Type":"ContainerDied","Data":"acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7"}
Oct 06 16:21:55 crc kubenswrapper[4888]: I1006 16:21:55.246397 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gdqnz" event={"ID":"652f4236-2811-4ee6-9738-fb29cc64967f","Type":"ContainerStarted","Data":"c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d"}
Oct 06 16:21:55 crc kubenswrapper[4888]: I1006 16:21:55.276122 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gdqnz" podStartSLOduration=2.846592442 podStartE2EDuration="6.276105409s" podCreationTimestamp="2025-10-06 16:21:49 +0000 UTC" firstStartedPulling="2025-10-06 16:21:51.202555302 +0000 UTC m=+4851.014906020" lastFinishedPulling="2025-10-06 16:21:54.632068259 +0000 UTC m=+4854.444418987" observedRunningTime="2025-10-06 16:21:55.270980738 +0000 UTC m=+4855.083331456" watchObservedRunningTime="2025-10-06 16:21:55.276105409 +0000 UTC m=+4855.088456117"
Oct 06 16:21:58 crc kubenswrapper[4888]: I1006 16:21:58.791474 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zzfqw"
Oct 06 16:21:58 crc kubenswrapper[4888]: I1006 16:21:58.792273 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zzfqw"
Oct 06 16:21:58 crc kubenswrapper[4888]: I1006 16:21:58.908440 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zzfqw"
Oct 06 16:21:59 crc kubenswrapper[4888]: I1006 16:21:59.327140 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zzfqw"
Oct 06 16:21:59 crc kubenswrapper[4888]: I1006 16:21:59.638456 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzfqw"]
Oct 06 16:21:59 crc kubenswrapper[4888]: I1006 16:21:59.761513 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:59 crc kubenswrapper[4888]: I1006 16:21:59.762223 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:21:59 crc kubenswrapper[4888]: I1006 16:21:59.823020 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:22:00 crc kubenswrapper[4888]: I1006 16:22:00.375400 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:22:01 crc kubenswrapper[4888]: I1006 16:22:01.298543 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zzfqw" podUID="1c530058-a294-4457-9ead-ee20a5d72a88" containerName="registry-server" containerID="cri-o://cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494" gracePeriod=2
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.041600 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gdqnz"]
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.091285 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzfqw"
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.276121 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzs4p\" (UniqueName: \"kubernetes.io/projected/1c530058-a294-4457-9ead-ee20a5d72a88-kube-api-access-gzs4p\") pod \"1c530058-a294-4457-9ead-ee20a5d72a88\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") "
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.276298 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-utilities\") pod \"1c530058-a294-4457-9ead-ee20a5d72a88\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") "
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.276385 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-catalog-content\") pod \"1c530058-a294-4457-9ead-ee20a5d72a88\" (UID: \"1c530058-a294-4457-9ead-ee20a5d72a88\") "
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.278271 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-utilities" (OuterVolumeSpecName: "utilities") pod "1c530058-a294-4457-9ead-ee20a5d72a88" (UID: "1c530058-a294-4457-9ead-ee20a5d72a88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.282093 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c530058-a294-4457-9ead-ee20a5d72a88-kube-api-access-gzs4p" (OuterVolumeSpecName: "kube-api-access-gzs4p") pod "1c530058-a294-4457-9ead-ee20a5d72a88" (UID: "1c530058-a294-4457-9ead-ee20a5d72a88"). InnerVolumeSpecName "kube-api-access-gzs4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.302171 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c530058-a294-4457-9ead-ee20a5d72a88" (UID: "1c530058-a294-4457-9ead-ee20a5d72a88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.309596 4888 generic.go:334] "Generic (PLEG): container finished" podID="1c530058-a294-4457-9ead-ee20a5d72a88" containerID="cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494" exitCode=0
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.309687 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzfqw" event={"ID":"1c530058-a294-4457-9ead-ee20a5d72a88","Type":"ContainerDied","Data":"cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494"}
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.309774 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zzfqw"
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.309873 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zzfqw" event={"ID":"1c530058-a294-4457-9ead-ee20a5d72a88","Type":"ContainerDied","Data":"0167e9ba46a37586164b05317817b48e2d08fae50e450f36c6172daa49e3774e"}
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.309891 4888 scope.go:117] "RemoveContainer" containerID="cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494"
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.352953 4888 scope.go:117] "RemoveContainer" containerID="23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8"
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.355225 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzfqw"]
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.362143 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zzfqw"]
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.374163 4888 scope.go:117] "RemoveContainer" containerID="9078b78a67be8230a00865bc1d2e5702233373e38034c702c68427bebe7635f9"
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.381199 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzs4p\" (UniqueName: \"kubernetes.io/projected/1c530058-a294-4457-9ead-ee20a5d72a88-kube-api-access-gzs4p\") on node \"crc\" DevicePath \"\""
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.381238 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.381252 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c530058-a294-4457-9ead-ee20a5d72a88-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.414723 4888 scope.go:117] "RemoveContainer" containerID="cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494"
Oct 06 16:22:02 crc kubenswrapper[4888]: E1006 16:22:02.415291 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494\": container with ID starting with cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494 not found: ID does not exist" containerID="cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494"
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.415340 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494"} err="failed to get container status \"cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494\": rpc error: code = NotFound desc = could not find container \"cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494\": container with ID starting with cddee72dab30be66ede088f46cc2a8a37535b750668270055753036cc807b494 not found: ID does not exist"
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.415420 4888 scope.go:117] "RemoveContainer" containerID="23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8"
Oct 06 16:22:02 crc kubenswrapper[4888]: E1006 16:22:02.415766 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8\": container with ID starting with 23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8 not found: ID does not exist" containerID="23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8"
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.415830 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8"} err="failed to get container status \"23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8\": rpc error: code = NotFound desc = could not find container \"23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8\": container with ID starting with 23b1c71e35df7808d42955ac3f4f28768d8bd3189c264c670d3a0560ff3899d8 not found: ID does not exist"
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.415846 4888 scope.go:117] "RemoveContainer" containerID="9078b78a67be8230a00865bc1d2e5702233373e38034c702c68427bebe7635f9"
Oct 06 16:22:02 crc kubenswrapper[4888]: E1006 16:22:02.416395 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9078b78a67be8230a00865bc1d2e5702233373e38034c702c68427bebe7635f9\": container with ID starting with 9078b78a67be8230a00865bc1d2e5702233373e38034c702c68427bebe7635f9 not found: ID does not exist" containerID="9078b78a67be8230a00865bc1d2e5702233373e38034c702c68427bebe7635f9"
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.416417 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9078b78a67be8230a00865bc1d2e5702233373e38034c702c68427bebe7635f9"} err="failed to get container status \"9078b78a67be8230a00865bc1d2e5702233373e38034c702c68427bebe7635f9\": rpc error: code = NotFound desc = could not find container \"9078b78a67be8230a00865bc1d2e5702233373e38034c702c68427bebe7635f9\": container with ID starting with 9078b78a67be8230a00865bc1d2e5702233373e38034c702c68427bebe7635f9 not found: ID does not exist"
Oct 06 16:22:02 crc kubenswrapper[4888]: I1006 16:22:02.941137 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c530058-a294-4457-9ead-ee20a5d72a88" path="/var/lib/kubelet/pods/1c530058-a294-4457-9ead-ee20a5d72a88/volumes"
Oct 06 16:22:03 crc kubenswrapper[4888]: I1006 16:22:03.320973 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gdqnz" podUID="652f4236-2811-4ee6-9738-fb29cc64967f" containerName="registry-server" containerID="cri-o://c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d" gracePeriod=2
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.138687 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.315139 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ql4j\" (UniqueName: \"kubernetes.io/projected/652f4236-2811-4ee6-9738-fb29cc64967f-kube-api-access-6ql4j\") pod \"652f4236-2811-4ee6-9738-fb29cc64967f\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") "
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.315326 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-utilities\") pod \"652f4236-2811-4ee6-9738-fb29cc64967f\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") "
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.315376 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-catalog-content\") pod \"652f4236-2811-4ee6-9738-fb29cc64967f\" (UID: \"652f4236-2811-4ee6-9738-fb29cc64967f\") "
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.316146 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-utilities" (OuterVolumeSpecName: "utilities") pod "652f4236-2811-4ee6-9738-fb29cc64967f" (UID: "652f4236-2811-4ee6-9738-fb29cc64967f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.322978 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652f4236-2811-4ee6-9738-fb29cc64967f-kube-api-access-6ql4j" (OuterVolumeSpecName: "kube-api-access-6ql4j") pod "652f4236-2811-4ee6-9738-fb29cc64967f" (UID: "652f4236-2811-4ee6-9738-fb29cc64967f"). InnerVolumeSpecName "kube-api-access-6ql4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.332198 4888 generic.go:334] "Generic (PLEG): container finished" podID="652f4236-2811-4ee6-9738-fb29cc64967f" containerID="c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d" exitCode=0
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.332241 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gdqnz" event={"ID":"652f4236-2811-4ee6-9738-fb29cc64967f","Type":"ContainerDied","Data":"c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d"}
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.332270 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gdqnz" event={"ID":"652f4236-2811-4ee6-9738-fb29cc64967f","Type":"ContainerDied","Data":"37268fecd1f12d608ea176b364684db3c42088d83e1b7198a5799bace2867d68"}
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.332290 4888 scope.go:117] "RemoveContainer" containerID="c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d"
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.332410 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gdqnz"
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.365868 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "652f4236-2811-4ee6-9738-fb29cc64967f" (UID: "652f4236-2811-4ee6-9738-fb29cc64967f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.376366 4888 scope.go:117] "RemoveContainer" containerID="acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7"
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.395393 4888 scope.go:117] "RemoveContainer" containerID="3874f57d1d661890f4674bd7d011c570ac7482171ddbd7bbb59e356e72443d48"
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.417570 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.417861 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652f4236-2811-4ee6-9738-fb29cc64967f-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.417938 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ql4j\" (UniqueName: \"kubernetes.io/projected/652f4236-2811-4ee6-9738-fb29cc64967f-kube-api-access-6ql4j\") on node \"crc\" DevicePath \"\""
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.436815 4888 scope.go:117] "RemoveContainer" containerID="c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d"
Oct 06 16:22:04 crc kubenswrapper[4888]: E1006 16:22:04.437323 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d\": container with ID starting with c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d not found: ID does not exist" containerID="c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d"
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.437354 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d"} err="failed to get container status \"c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d\": rpc error: code = NotFound desc = could not find container \"c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d\": container with ID starting with c22344abbd4c44acf3cad3298355073b07d6f1c59cf5661a4013737c7491155d not found: ID does not exist"
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.437376 4888 scope.go:117] "RemoveContainer" containerID="acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7"
Oct 06 16:22:04 crc kubenswrapper[4888]: E1006 16:22:04.437692 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7\": container with ID starting with acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7 not found: ID does not exist" containerID="acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7"
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.437718 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7"} err="failed to get container status \"acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7\": rpc error: code = NotFound desc = could not find container \"acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7\": container with ID starting with acc1729552883007666b67877a550331162e6b57e02eadfc742dd0ed9f3e2bc7 not found: ID does not exist"
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.437734 4888 scope.go:117] "RemoveContainer" containerID="3874f57d1d661890f4674bd7d011c570ac7482171ddbd7bbb59e356e72443d48"
Oct 06 16:22:04 crc kubenswrapper[4888]: E1006 16:22:04.437986 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3874f57d1d661890f4674bd7d011c570ac7482171ddbd7bbb59e356e72443d48\": container with ID starting with 3874f57d1d661890f4674bd7d011c570ac7482171ddbd7bbb59e356e72443d48 not found: ID does not exist" containerID="3874f57d1d661890f4674bd7d011c570ac7482171ddbd7bbb59e356e72443d48"
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.438006 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3874f57d1d661890f4674bd7d011c570ac7482171ddbd7bbb59e356e72443d48"} err="failed to get container status \"3874f57d1d661890f4674bd7d011c570ac7482171ddbd7bbb59e356e72443d48\": rpc error: code = NotFound desc = could not find container \"3874f57d1d661890f4674bd7d011c570ac7482171ddbd7bbb59e356e72443d48\": container with ID starting with 3874f57d1d661890f4674bd7d011c570ac7482171ddbd7bbb59e356e72443d48 not found: ID does not exist"
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.689429 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gdqnz"]
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.704036 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gdqnz"]
Oct 06 16:22:04 crc kubenswrapper[4888]: I1006 16:22:04.942582 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="652f4236-2811-4ee6-9738-fb29cc64967f" path="/var/lib/kubelet/pods/652f4236-2811-4ee6-9738-fb29cc64967f/volumes"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.811544 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhv7l/must-gather-kz7f8"]
Oct 06 16:22:51 crc kubenswrapper[4888]: E1006 16:22:51.812429 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652f4236-2811-4ee6-9738-fb29cc64967f" containerName="extract-content"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.812442 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="652f4236-2811-4ee6-9738-fb29cc64967f" containerName="extract-content"
Oct 06 16:22:51 crc kubenswrapper[4888]: E1006 16:22:51.812465 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c530058-a294-4457-9ead-ee20a5d72a88" containerName="extract-utilities"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.812471 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c530058-a294-4457-9ead-ee20a5d72a88" containerName="extract-utilities"
Oct 06 16:22:51 crc kubenswrapper[4888]: E1006 16:22:51.812496 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652f4236-2811-4ee6-9738-fb29cc64967f" containerName="registry-server"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.812503 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="652f4236-2811-4ee6-9738-fb29cc64967f" containerName="registry-server"
Oct 06 16:22:51 crc kubenswrapper[4888]: E1006 16:22:51.812516 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652f4236-2811-4ee6-9738-fb29cc64967f" containerName="extract-utilities"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.812521 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="652f4236-2811-4ee6-9738-fb29cc64967f" containerName="extract-utilities"
Oct 06 16:22:51 crc kubenswrapper[4888]: E1006 16:22:51.812534 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c530058-a294-4457-9ead-ee20a5d72a88" containerName="extract-content"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.812540 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c530058-a294-4457-9ead-ee20a5d72a88" containerName="extract-content"
Oct 06 16:22:51 crc kubenswrapper[4888]: E1006 16:22:51.812551 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c530058-a294-4457-9ead-ee20a5d72a88" containerName="registry-server"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.812557 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c530058-a294-4457-9ead-ee20a5d72a88" containerName="registry-server"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.812727 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="652f4236-2811-4ee6-9738-fb29cc64967f" containerName="registry-server"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.812748 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c530058-a294-4457-9ead-ee20a5d72a88" containerName="registry-server"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.813713 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/must-gather-kz7f8"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.819982 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qhv7l"/"openshift-service-ca.crt"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.820025 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qhv7l"/"kube-root-ca.crt"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.840716 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qhv7l/must-gather-kz7f8"]
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.896618 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvjts\" (UniqueName: \"kubernetes.io/projected/645d487f-92ae-43a8-a9f2-768977b9eea7-kube-api-access-gvjts\") pod \"must-gather-kz7f8\" (UID: \"645d487f-92ae-43a8-a9f2-768977b9eea7\") " pod="openshift-must-gather-qhv7l/must-gather-kz7f8"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.896814 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/645d487f-92ae-43a8-a9f2-768977b9eea7-must-gather-output\") pod \"must-gather-kz7f8\" (UID: \"645d487f-92ae-43a8-a9f2-768977b9eea7\") " pod="openshift-must-gather-qhv7l/must-gather-kz7f8"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.999220 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/645d487f-92ae-43a8-a9f2-768977b9eea7-must-gather-output\") pod \"must-gather-kz7f8\" (UID: \"645d487f-92ae-43a8-a9f2-768977b9eea7\") " pod="openshift-must-gather-qhv7l/must-gather-kz7f8"
Oct 06 16:22:51 crc kubenswrapper[4888]: I1006 16:22:51.999360 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvjts\" (UniqueName: \"kubernetes.io/projected/645d487f-92ae-43a8-a9f2-768977b9eea7-kube-api-access-gvjts\") pod \"must-gather-kz7f8\" (UID: \"645d487f-92ae-43a8-a9f2-768977b9eea7\") " pod="openshift-must-gather-qhv7l/must-gather-kz7f8"
Oct 06 16:22:52 crc kubenswrapper[4888]: I1006 16:22:52.000098 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/645d487f-92ae-43a8-a9f2-768977b9eea7-must-gather-output\") pod \"must-gather-kz7f8\" (UID: \"645d487f-92ae-43a8-a9f2-768977b9eea7\") " pod="openshift-must-gather-qhv7l/must-gather-kz7f8"
Oct 06 16:22:52 crc kubenswrapper[4888]: I1006 16:22:52.043207 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvjts\" (UniqueName: \"kubernetes.io/projected/645d487f-92ae-43a8-a9f2-768977b9eea7-kube-api-access-gvjts\") pod \"must-gather-kz7f8\" (UID: \"645d487f-92ae-43a8-a9f2-768977b9eea7\") " pod="openshift-must-gather-qhv7l/must-gather-kz7f8"
Oct 06 16:22:52 crc kubenswrapper[4888]: I1006 16:22:52.134119 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/must-gather-kz7f8"
Oct 06 16:22:52 crc kubenswrapper[4888]: I1006 16:22:52.660711 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qhv7l/must-gather-kz7f8"]
Oct 06 16:22:52 crc kubenswrapper[4888]: I1006 16:22:52.830960 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/must-gather-kz7f8" event={"ID":"645d487f-92ae-43a8-a9f2-768977b9eea7","Type":"ContainerStarted","Data":"30be48f01d6386a70839920d8dbe51759f422f1bb2850a6128ca5146413610e6"}
Oct 06 16:22:59 crc kubenswrapper[4888]: I1006 16:22:59.901364 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/must-gather-kz7f8" event={"ID":"645d487f-92ae-43a8-a9f2-768977b9eea7","Type":"ContainerStarted","Data":"57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149"}
Oct 06 16:22:59 crc kubenswrapper[4888]: I1006 16:22:59.901990 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/must-gather-kz7f8" event={"ID":"645d487f-92ae-43a8-a9f2-768977b9eea7","Type":"ContainerStarted","Data":"7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886"}
Oct 06 16:22:59 crc kubenswrapper[4888]: I1006 16:22:59.932484 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qhv7l/must-gather-kz7f8" podStartSLOduration=3.266522643 podStartE2EDuration="8.932465551s" podCreationTimestamp="2025-10-06 16:22:51 +0000 UTC" firstStartedPulling="2025-10-06 16:22:52.636322248 +0000 UTC m=+4912.448672966" lastFinishedPulling="2025-10-06 16:22:58.302265146 +0000 UTC m=+4918.114615874" observedRunningTime="2025-10-06 16:22:59.930147529 +0000 UTC m=+4919.742498247" watchObservedRunningTime="2025-10-06 16:22:59.932465551 +0000 UTC m=+4919.744816259"
Oct 06 16:23:02 crc kubenswrapper[4888]: I1006 16:23:02.564112 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 16:23:02 crc kubenswrapper[4888]: I1006 16:23:02.564761 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 16:23:04 crc kubenswrapper[4888]: I1006 16:23:04.587489 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhv7l/crc-debug-5trq8"]
Oct 06 16:23:04 crc kubenswrapper[4888]: I1006 16:23:04.589021 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-5trq8"
Oct 06 16:23:04 crc kubenswrapper[4888]: I1006 16:23:04.591491 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qhv7l"/"default-dockercfg-p5wm2"
Oct 06 16:23:04 crc kubenswrapper[4888]: I1006 16:23:04.658859 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5949d901-cb3f-4c51-93ae-856be2bcfe0e-host\") pod \"crc-debug-5trq8\" (UID: \"5949d901-cb3f-4c51-93ae-856be2bcfe0e\") " pod="openshift-must-gather-qhv7l/crc-debug-5trq8"
Oct 06 16:23:04 crc kubenswrapper[4888]: I1006 16:23:04.658949 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4s8r\" (UniqueName: \"kubernetes.io/projected/5949d901-cb3f-4c51-93ae-856be2bcfe0e-kube-api-access-k4s8r\") pod \"crc-debug-5trq8\" (UID: \"5949d901-cb3f-4c51-93ae-856be2bcfe0e\") " pod="openshift-must-gather-qhv7l/crc-debug-5trq8"
Oct 06 16:23:04 crc kubenswrapper[4888]: I1006 16:23:04.761032 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4s8r\" (UniqueName: \"kubernetes.io/projected/5949d901-cb3f-4c51-93ae-856be2bcfe0e-kube-api-access-k4s8r\") pod \"crc-debug-5trq8\" (UID: \"5949d901-cb3f-4c51-93ae-856be2bcfe0e\") " pod="openshift-must-gather-qhv7l/crc-debug-5trq8"
Oct 06 16:23:04 crc kubenswrapper[4888]: I1006 16:23:04.761455 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5949d901-cb3f-4c51-93ae-856be2bcfe0e-host\") pod \"crc-debug-5trq8\" (UID: \"5949d901-cb3f-4c51-93ae-856be2bcfe0e\") " pod="openshift-must-gather-qhv7l/crc-debug-5trq8"
Oct 06 16:23:04 crc kubenswrapper[4888]: I1006 16:23:04.761560 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5949d901-cb3f-4c51-93ae-856be2bcfe0e-host\") pod \"crc-debug-5trq8\" (UID: \"5949d901-cb3f-4c51-93ae-856be2bcfe0e\") " pod="openshift-must-gather-qhv7l/crc-debug-5trq8"
Oct 06 16:23:04 crc kubenswrapper[4888]: I1006 16:23:04.787514 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4s8r\" (UniqueName: \"kubernetes.io/projected/5949d901-cb3f-4c51-93ae-856be2bcfe0e-kube-api-access-k4s8r\") pod \"crc-debug-5trq8\" (UID: \"5949d901-cb3f-4c51-93ae-856be2bcfe0e\") " pod="openshift-must-gather-qhv7l/crc-debug-5trq8"
Oct 06 16:23:04 crc kubenswrapper[4888]: I1006 16:23:04.908415 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-5trq8"
Oct 06 16:23:05 crc kubenswrapper[4888]: I1006 16:23:05.948308 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/crc-debug-5trq8" event={"ID":"5949d901-cb3f-4c51-93ae-856be2bcfe0e","Type":"ContainerStarted","Data":"1d1ce9ff87d1ad55e161f6e850510956651b91710cad8df72803b11c50338e08"}
Oct 06 16:23:17 crc kubenswrapper[4888]: I1006 16:23:17.060397 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/crc-debug-5trq8" event={"ID":"5949d901-cb3f-4c51-93ae-856be2bcfe0e","Type":"ContainerStarted","Data":"bb687567ac180cb3371b1768891468427d0bd19b5810e2c71020a6974e92263e"}
Oct 06 16:23:17 crc kubenswrapper[4888]: I1006 16:23:17.084783 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qhv7l/crc-debug-5trq8" podStartSLOduration=1.646638244 podStartE2EDuration="13.084766717s" podCreationTimestamp="2025-10-06 16:23:04 +0000 UTC" firstStartedPulling="2025-10-06 16:23:04.942260686 +0000 UTC m=+4924.754611404" lastFinishedPulling="2025-10-06 16:23:16.380389159 +0000 UTC m=+4936.192739877" observedRunningTime="2025-10-06 16:23:17.079128299 +0000 UTC m=+4936.891479067" watchObservedRunningTime="2025-10-06 16:23:17.084766717 +0000 UTC m=+4936.897117435"
Oct 06 16:23:32 crc kubenswrapper[4888]: I1006 16:23:32.564691 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 16:23:32 crc kubenswrapper[4888]: I1006 16:23:32.565459 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 16:24:02 crc kubenswrapper[4888]: I1006 16:24:02.563728 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 16:24:02 crc kubenswrapper[4888]: I1006 16:24:02.564347 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 16:24:02 crc kubenswrapper[4888]: I1006 16:24:02.564415 4888 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-spjkk"
Oct 06 16:24:02 crc kubenswrapper[4888]: I1006 16:24:02.565551 4888 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c"} pod="openshift-machine-config-operator/machine-config-daemon-spjkk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 06 16:24:02 crc kubenswrapper[4888]: I1006 16:24:02.565658 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" containerID="cri-o://c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" gracePeriod=600
Oct 06 16:24:02 crc kubenswrapper[4888]: E1006 16:24:02.732567 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 16:24:03 crc kubenswrapper[4888]: I1006 16:24:03.455595 4888 generic.go:334] "Generic (PLEG): container finished" podID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" exitCode=0
Oct 06 16:24:03 crc kubenswrapper[4888]: I1006 16:24:03.455731 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerDied","Data":"c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c"}
Oct 06 16:24:03 crc kubenswrapper[4888]: I1006 16:24:03.455942 4888 scope.go:117] "RemoveContainer" containerID="7d3611487d0528d0af0215a407358f0276ad7ed9d5fdb668d5150c072db9269d"
Oct 06 16:24:03 crc kubenswrapper[4888]: I1006 16:24:03.456554 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c"
Oct 06 16:24:03 crc kubenswrapper[4888]: E1006 16:24:03.456832 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 16:24:16 crc kubenswrapper[4888]: I1006 16:24:16.921469 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c"
Oct 06 16:24:16 crc kubenswrapper[4888]: E1006 16:24:16.922085 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca"
Oct 06 16:24:27 crc kubenswrapper[4888]: I1006 16:24:27.921522 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c"
Oct 06 16:24:27 crc kubenswrapper[4888]: E1006 16:24:27.922486 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\""
pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:24:29 crc kubenswrapper[4888]: I1006 16:24:29.908188 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fc6dd5fdd-d2l7d_0696f900-55ec-420c-a00a-a8e749b36aa0/barbican-api/0.log" Oct 06 16:24:29 crc kubenswrapper[4888]: I1006 16:24:29.993760 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fc6dd5fdd-d2l7d_0696f900-55ec-420c-a00a-a8e749b36aa0/barbican-api-log/0.log" Oct 06 16:24:30 crc kubenswrapper[4888]: I1006 16:24:30.230120 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b6957c776-hxrm5_6f1f4bff-0c57-4619-bc07-90aec0cc064c/barbican-keystone-listener/0.log" Oct 06 16:24:30 crc kubenswrapper[4888]: I1006 16:24:30.278723 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6b6957c776-hxrm5_6f1f4bff-0c57-4619-bc07-90aec0cc064c/barbican-keystone-listener-log/0.log" Oct 06 16:24:30 crc kubenswrapper[4888]: I1006 16:24:30.471964 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-67fcdb97f5-n5qtk_6cf865ad-c3ca-4633-8f09-12865f2e3772/barbican-worker/0.log" Oct 06 16:24:30 crc kubenswrapper[4888]: I1006 16:24:30.543766 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-67fcdb97f5-n5qtk_6cf865ad-c3ca-4633-8f09-12865f2e3772/barbican-worker-log/0.log" Oct 06 16:24:30 crc kubenswrapper[4888]: I1006 16:24:30.863055 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_224634c8-9de5-4ab5-a57a-7785afac360c/ceilometer-central-agent/0.log" Oct 06 16:24:30 crc kubenswrapper[4888]: I1006 16:24:30.901791 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_224634c8-9de5-4ab5-a57a-7785afac360c/ceilometer-notification-agent/0.log" Oct 06 16:24:30 crc kubenswrapper[4888]: I1006 16:24:30.991620 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_224634c8-9de5-4ab5-a57a-7785afac360c/proxy-httpd/0.log" Oct 06 16:24:31 crc kubenswrapper[4888]: I1006 16:24:31.065732 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_224634c8-9de5-4ab5-a57a-7785afac360c/sg-core/0.log" Oct 06 16:24:31 crc kubenswrapper[4888]: I1006 16:24:31.246112 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c5c59923-504c-4e97-bcde-a0b2af5adab1/cinder-api/0.log" Oct 06 16:24:31 crc kubenswrapper[4888]: I1006 16:24:31.425234 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c5c59923-504c-4e97-bcde-a0b2af5adab1/cinder-api-log/0.log" Oct 06 16:24:31 crc kubenswrapper[4888]: I1006 16:24:31.432214 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7176912c-2b8c-46ac-b8e6-8067439a2720/cinder-scheduler/0.log" Oct 06 16:24:31 crc kubenswrapper[4888]: I1006 16:24:31.680848 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7176912c-2b8c-46ac-b8e6-8067439a2720/probe/0.log" Oct 06 16:24:31 crc kubenswrapper[4888]: I1006 16:24:31.790462 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-79dc84bdb7-w2hd2_4b78ed62-f8e2-4e9a-8517-e1005c50b536/init/0.log" Oct 06 16:24:31 crc kubenswrapper[4888]: I1006 16:24:31.947262 4888 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-79dc84bdb7-w2hd2_4b78ed62-f8e2-4e9a-8517-e1005c50b536/init/0.log" Oct 06 16:24:31 crc kubenswrapper[4888]: I1006 16:24:31.984440 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-79dc84bdb7-w2hd2_4b78ed62-f8e2-4e9a-8517-e1005c50b536/dnsmasq-dns/0.log" Oct 06 16:24:32 crc kubenswrapper[4888]: I1006 16:24:32.199364 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ca688e57-9165-4c51-8de7-366eebeb8596/glance-httpd/0.log" Oct 06 16:24:32 crc kubenswrapper[4888]: I1006 16:24:32.255390 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ca688e57-9165-4c51-8de7-366eebeb8596/glance-log/0.log" Oct 06 16:24:32 crc kubenswrapper[4888]: I1006 16:24:32.435106 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f820ff6f-d5ea-4422-991f-a982fb9b563d/glance-httpd/0.log" Oct 06 16:24:32 crc kubenswrapper[4888]: I1006 16:24:32.500004 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f820ff6f-d5ea-4422-991f-a982fb9b563d/glance-log/0.log" Oct 06 16:24:33 crc kubenswrapper[4888]: I1006 16:24:33.085238 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6588c6d648-nnxsr_64852c10-aeb0-424b-a601-0b46718c0fc7/horizon/0.log" Oct 06 16:24:33 crc kubenswrapper[4888]: I1006 16:24:33.541777 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-598ff49d66-g94f6_5a39a6de-4690-4845-af62-7cb05de93909/keystone-api/0.log" Oct 06 16:24:33 crc kubenswrapper[4888]: I1006 16:24:33.650599 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6588c6d648-nnxsr_64852c10-aeb0-424b-a601-0b46718c0fc7/horizon-log/0.log" Oct 06 16:24:33 crc kubenswrapper[4888]: I1006 16:24:33.834468 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29329441-v7tzm_cd50e72b-66b9-49db-9a55-7aff1a106a03/keystone-cron/0.log" Oct 06 16:24:33 crc kubenswrapper[4888]: I1006 16:24:33.995974 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0cb31d67-e313-48ce-8230-513d88d01445/kube-state-metrics/0.log" Oct 06 16:24:34 crc kubenswrapper[4888]: I1006 16:24:34.300544 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7dc98fc94f-nlvnj_39bde926-1f59-45eb-8f71-841d380f9c5d/neutron-api/0.log" Oct 06 16:24:34 crc kubenswrapper[4888]: I1006 16:24:34.380332 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7dc98fc94f-nlvnj_39bde926-1f59-45eb-8f71-841d380f9c5d/neutron-httpd/0.log" Oct 06 16:24:34 crc kubenswrapper[4888]: I1006 16:24:34.864978 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e33f0083-ea10-494e-858d-94ba18279687/nova-api-log/0.log" Oct 06 16:24:35 crc kubenswrapper[4888]: I1006 16:24:35.203173 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e33f0083-ea10-494e-858d-94ba18279687/nova-api-api/0.log" Oct 06 16:24:35 crc kubenswrapper[4888]: I1006 16:24:35.221685 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_bf04c24d-a750-46b9-8b0a-26cb500c2494/nova-cell0-conductor-conductor/0.log" Oct 06 16:24:35 crc kubenswrapper[4888]: I1006 16:24:35.551574 4888 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_873575d3-d1c1-4c76-a0f1-d9cb94724fe2/nova-cell1-conductor-conductor/0.log" Oct 06 16:24:35 crc kubenswrapper[4888]: I1006 16:24:35.619569 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d3a38734-2da1-478a-8d16-dae1c736838e/nova-cell1-novncproxy-novncproxy/0.log" Oct 06 16:24:35 crc kubenswrapper[4888]: I1006 16:24:35.876816 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c4bcd77d-19fd-4f69-8879-906569e3c709/nova-metadata-log/0.log" Oct 06 16:24:36 crc kubenswrapper[4888]: I1006 16:24:36.828846 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_3eb8ff50-0943-4755-875f-80b51c818468/nova-scheduler-scheduler/0.log" Oct 06 16:24:37 crc kubenswrapper[4888]: I1006 16:24:37.233651 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5/mysql-bootstrap/0.log" Oct 06 16:24:37 crc kubenswrapper[4888]: I1006 16:24:37.498382 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5/mysql-bootstrap/0.log" Oct 06 16:24:37 crc kubenswrapper[4888]: I1006 16:24:37.545804 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3d44f4c2-c7ba-4bb2-b2e2-16fafc256ea5/galera/0.log" Oct 06 16:24:38 crc kubenswrapper[4888]: I1006 16:24:38.047173 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b6265e0c-c180-4f1b-9d3b-73321ed1caf5/mysql-bootstrap/0.log" Oct 06 16:24:38 crc kubenswrapper[4888]: I1006 16:24:38.301990 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b6265e0c-c180-4f1b-9d3b-73321ed1caf5/mysql-bootstrap/0.log" Oct 06 16:24:38 crc kubenswrapper[4888]: I1006 16:24:38.382744 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b6265e0c-c180-4f1b-9d3b-73321ed1caf5/galera/0.log" Oct 06 16:24:38 crc kubenswrapper[4888]: I1006 16:24:38.632922 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c4bcd77d-19fd-4f69-8879-906569e3c709/nova-metadata-metadata/0.log" Oct 06 16:24:38 crc kubenswrapper[4888]: I1006 16:24:38.677617 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_655523f3-6f3b-4675-8b5a-4c0451a185ca/openstackclient/0.log" Oct 06 16:24:38 crc kubenswrapper[4888]: I1006 16:24:38.878655 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-89sjh_8106dadc-62e5-4f81-9e15-74c474f1c111/openstack-network-exporter/0.log" Oct 06 16:24:39 crc kubenswrapper[4888]: I1006 16:24:39.196142 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nzvpg_82705879-10de-4927-946c-c55766069d1b/ovn-controller/0.log" Oct 06 16:24:39 crc kubenswrapper[4888]: I1006 16:24:39.294206 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mczwz_737bd423-e5c2-4a4e-9463-56e1bb95b101/ovsdb-server-init/0.log" Oct 06 16:24:39 crc kubenswrapper[4888]: I1006 16:24:39.679838 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mczwz_737bd423-e5c2-4a4e-9463-56e1bb95b101/ovsdb-server-init/0.log" Oct 06 16:24:39 crc kubenswrapper[4888]: I1006 16:24:39.696881 4888 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-mczwz_737bd423-e5c2-4a4e-9463-56e1bb95b101/ovsdb-server/0.log" Oct 06 16:24:39 crc kubenswrapper[4888]: I1006 16:24:39.756394 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mczwz_737bd423-e5c2-4a4e-9463-56e1bb95b101/ovs-vswitchd/0.log" Oct 06 16:24:40 crc kubenswrapper[4888]: I1006 16:24:40.030486 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_26b90491-f5c9-42fd-b6a0-f21d2771566a/openstack-network-exporter/0.log" Oct 06 16:24:40 crc kubenswrapper[4888]: I1006 16:24:40.057602 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_26b90491-f5c9-42fd-b6a0-f21d2771566a/ovn-northd/0.log" Oct 06 16:24:40 crc kubenswrapper[4888]: I1006 16:24:40.274011 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e5b21496-4576-4872-9a7b-a0fa475466a6/ovsdbserver-nb/0.log" Oct 06 16:24:40 crc kubenswrapper[4888]: I1006 16:24:40.282841 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e5b21496-4576-4872-9a7b-a0fa475466a6/openstack-network-exporter/0.log" Oct 06 16:24:40 crc kubenswrapper[4888]: I1006 16:24:40.616334 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_74ff743d-532c-4a3a-bf4a-967b9edca039/ovsdbserver-sb/0.log" Oct 06 16:24:40 crc kubenswrapper[4888]: I1006 16:24:40.619619 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_74ff743d-532c-4a3a-bf4a-967b9edca039/openstack-network-exporter/0.log" Oct 06 16:24:40 crc kubenswrapper[4888]: I1006 16:24:40.938350 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:24:40 crc kubenswrapper[4888]: E1006 16:24:40.938544 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:24:41 crc kubenswrapper[4888]: I1006 16:24:41.241901 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9885565f4-9j2hk_3fe4a962-ac46-4084-a054-4b7499863e84/placement-api/0.log" Oct 06 16:24:41 crc kubenswrapper[4888]: I1006 16:24:41.258664 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9885565f4-9j2hk_3fe4a962-ac46-4084-a054-4b7499863e84/placement-log/0.log" Oct 06 16:24:41 crc kubenswrapper[4888]: I1006 16:24:41.466993 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a6bbff15-928d-43f2-8a4b-0c0ee40d73a5/setup-container/0.log" Oct 06 16:24:41 crc kubenswrapper[4888]: I1006 16:24:41.814702 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a6bbff15-928d-43f2-8a4b-0c0ee40d73a5/rabbitmq/0.log" Oct 06 16:24:41 crc kubenswrapper[4888]: I1006 16:24:41.831324 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_a6bbff15-928d-43f2-8a4b-0c0ee40d73a5/setup-container/0.log" Oct 06 16:24:42 crc kubenswrapper[4888]: I1006 16:24:42.037700 4888 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_80df2079-12d2-4a47-837c-69d8f26209a2/setup-container/0.log" Oct 06 16:24:42 crc kubenswrapper[4888]: I1006 16:24:42.307533 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_80df2079-12d2-4a47-837c-69d8f26209a2/rabbitmq/0.log" Oct 06 16:24:42 crc kubenswrapper[4888]: I1006 16:24:42.313083 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_80df2079-12d2-4a47-837c-69d8f26209a2/setup-container/0.log" Oct 06 16:24:42 crc kubenswrapper[4888]: I1006 16:24:42.558489 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5786cd685c-vnwnq_7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9/proxy-server/0.log" Oct 06 16:24:42 crc kubenswrapper[4888]: I1006 16:24:42.704011 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5786cd685c-vnwnq_7ca1e872-3246-4eda-8c1d-bafe1fd9c7c9/proxy-httpd/0.log" Oct 06 16:24:42 crc kubenswrapper[4888]: I1006 16:24:42.758381 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-qcnvj_4aa2563c-6959-448c-9708-99f647cd24e1/swift-ring-rebalance/0.log" Oct 06 16:24:42 crc kubenswrapper[4888]: I1006 16:24:42.973241 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/account-auditor/0.log" Oct 06 16:24:42 crc kubenswrapper[4888]: I1006 16:24:42.977910 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/account-reaper/0.log" Oct 06 16:24:43 crc kubenswrapper[4888]: I1006 16:24:43.272545 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/account-replicator/0.log" Oct 06 16:24:43 crc kubenswrapper[4888]: I1006 16:24:43.281461 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/container-auditor/0.log" Oct 06 16:24:43 crc kubenswrapper[4888]: I1006 16:24:43.299838 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/account-server/0.log" Oct 06 16:24:43 crc kubenswrapper[4888]: I1006 16:24:43.507598 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/container-replicator/0.log" Oct 06 16:24:43 crc kubenswrapper[4888]: I1006 16:24:43.513389 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/container-server/0.log" Oct 06 16:24:43 crc kubenswrapper[4888]: I1006 16:24:43.618269 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/container-updater/0.log" Oct 06 16:24:43 crc kubenswrapper[4888]: I1006 16:24:43.754489 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/object-auditor/0.log" Oct 06 16:24:43 crc kubenswrapper[4888]: I1006 16:24:43.786870 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/object-expirer/0.log" Oct 06 16:24:43 crc kubenswrapper[4888]: I1006 16:24:43.944566 4888 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/object-replicator/0.log" Oct 06 16:24:43 crc kubenswrapper[4888]: I1006 16:24:43.988366 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/object-updater/0.log" Oct 06 16:24:44 crc kubenswrapper[4888]: I1006 16:24:44.008953 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/object-server/0.log" Oct 06 16:24:44 crc kubenswrapper[4888]: I1006 16:24:44.171663 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/rsync/0.log" Oct 06 16:24:44 crc kubenswrapper[4888]: I1006 16:24:44.227098 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_277682ba-0d72-43d5-b52c-59f6b02b2963/swift-recon-cron/0.log" Oct 06 16:24:46 crc kubenswrapper[4888]: I1006 16:24:46.798252 4888 generic.go:334] "Generic (PLEG): container finished" podID="5949d901-cb3f-4c51-93ae-856be2bcfe0e" containerID="bb687567ac180cb3371b1768891468427d0bd19b5810e2c71020a6974e92263e" exitCode=0 Oct 06 16:24:46 crc kubenswrapper[4888]: I1006 16:24:46.798453 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/crc-debug-5trq8" event={"ID":"5949d901-cb3f-4c51-93ae-856be2bcfe0e","Type":"ContainerDied","Data":"bb687567ac180cb3371b1768891468427d0bd19b5810e2c71020a6974e92263e"} Oct 06 16:24:47 crc kubenswrapper[4888]: I1006 16:24:47.948051 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-5trq8" Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.013392 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhv7l/crc-debug-5trq8"] Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.025656 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qhv7l/crc-debug-5trq8"] Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.061626 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5949d901-cb3f-4c51-93ae-856be2bcfe0e-host\") pod \"5949d901-cb3f-4c51-93ae-856be2bcfe0e\" (UID: \"5949d901-cb3f-4c51-93ae-856be2bcfe0e\") " Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.061789 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5949d901-cb3f-4c51-93ae-856be2bcfe0e-host" (OuterVolumeSpecName: "host") pod "5949d901-cb3f-4c51-93ae-856be2bcfe0e" (UID: "5949d901-cb3f-4c51-93ae-856be2bcfe0e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.061942 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4s8r\" (UniqueName: \"kubernetes.io/projected/5949d901-cb3f-4c51-93ae-856be2bcfe0e-kube-api-access-k4s8r\") pod \"5949d901-cb3f-4c51-93ae-856be2bcfe0e\" (UID: \"5949d901-cb3f-4c51-93ae-856be2bcfe0e\") " Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.063187 4888 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5949d901-cb3f-4c51-93ae-856be2bcfe0e-host\") on node \"crc\" DevicePath \"\"" Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.087539 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5949d901-cb3f-4c51-93ae-856be2bcfe0e-kube-api-access-k4s8r" (OuterVolumeSpecName: "kube-api-access-k4s8r") pod "5949d901-cb3f-4c51-93ae-856be2bcfe0e" (UID: "5949d901-cb3f-4c51-93ae-856be2bcfe0e"). InnerVolumeSpecName "kube-api-access-k4s8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.167983 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4s8r\" (UniqueName: \"kubernetes.io/projected/5949d901-cb3f-4c51-93ae-856be2bcfe0e-kube-api-access-k4s8r\") on node \"crc\" DevicePath \"\"" Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.785926 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_49e0f5f3-c656-4e89-a5d6-73443af4afc4/memcached/0.log" Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.817564 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d1ce9ff87d1ad55e161f6e850510956651b91710cad8df72803b11c50338e08" Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.817620 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-5trq8" Oct 06 16:24:48 crc kubenswrapper[4888]: I1006 16:24:48.932746 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5949d901-cb3f-4c51-93ae-856be2bcfe0e" path="/var/lib/kubelet/pods/5949d901-cb3f-4c51-93ae-856be2bcfe0e/volumes" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.222306 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhv7l/crc-debug-fm9q4"] Oct 06 16:24:49 crc kubenswrapper[4888]: E1006 16:24:49.222668 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5949d901-cb3f-4c51-93ae-856be2bcfe0e" containerName="container-00" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.222681 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="5949d901-cb3f-4c51-93ae-856be2bcfe0e" containerName="container-00" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.222897 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="5949d901-cb3f-4c51-93ae-856be2bcfe0e" containerName="container-00" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.223518 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.226187 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qhv7l"/"default-dockercfg-p5wm2" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.282614 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f572666f-7359-41a3-888e-6da14598f126-host\") pod \"crc-debug-fm9q4\" (UID: \"f572666f-7359-41a3-888e-6da14598f126\") " pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.282788 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf7bz\" (UniqueName: \"kubernetes.io/projected/f572666f-7359-41a3-888e-6da14598f126-kube-api-access-kf7bz\") pod \"crc-debug-fm9q4\" (UID: \"f572666f-7359-41a3-888e-6da14598f126\") " pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.384447 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf7bz\" (UniqueName: \"kubernetes.io/projected/f572666f-7359-41a3-888e-6da14598f126-kube-api-access-kf7bz\") pod \"crc-debug-fm9q4\" (UID: \"f572666f-7359-41a3-888e-6da14598f126\") " pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.384547 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f572666f-7359-41a3-888e-6da14598f126-host\") pod \"crc-debug-fm9q4\" (UID: \"f572666f-7359-41a3-888e-6da14598f126\") " pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.384712 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f572666f-7359-41a3-888e-6da14598f126-host\") pod \"crc-debug-fm9q4\" (UID: \"f572666f-7359-41a3-888e-6da14598f126\") " pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.403438 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf7bz\" (UniqueName: \"kubernetes.io/projected/f572666f-7359-41a3-888e-6da14598f126-kube-api-access-kf7bz\") pod \"crc-debug-fm9q4\" (UID: \"f572666f-7359-41a3-888e-6da14598f126\") " pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.538465 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.825291 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" event={"ID":"f572666f-7359-41a3-888e-6da14598f126","Type":"ContainerStarted","Data":"6d01f970fa4dfd285735ba78c779d899fecf218170fde357d79434601de08630"} Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.825765 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" event={"ID":"f572666f-7359-41a3-888e-6da14598f126","Type":"ContainerStarted","Data":"05ff8971479bd50930baab33087d0eae6de4931bb5522b4b90fffaaaf1ce2c74"} Oct 06 16:24:49 crc kubenswrapper[4888]: I1006 16:24:49.837615 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" podStartSLOduration=0.837600345 podStartE2EDuration="837.600345ms" podCreationTimestamp="2025-10-06 16:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 16:24:49.835049695 +0000 UTC m=+5029.647400413" watchObservedRunningTime="2025-10-06 16:24:49.837600345 +0000 UTC m=+5029.649951063" Oct 06 16:24:50 crc kubenswrapper[4888]: I1006 16:24:50.845719 4888 generic.go:334] "Generic (PLEG): container finished" podID="f572666f-7359-41a3-888e-6da14598f126" containerID="6d01f970fa4dfd285735ba78c779d899fecf218170fde357d79434601de08630" exitCode=0 Oct 06 16:24:50 crc kubenswrapper[4888]: I1006 16:24:50.845874 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" event={"ID":"f572666f-7359-41a3-888e-6da14598f126","Type":"ContainerDied","Data":"6d01f970fa4dfd285735ba78c779d899fecf218170fde357d79434601de08630"} Oct 06 16:24:51 crc kubenswrapper[4888]: I1006 16:24:51.950023 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" Oct 06 16:24:52 crc kubenswrapper[4888]: I1006 16:24:52.031474 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf7bz\" (UniqueName: \"kubernetes.io/projected/f572666f-7359-41a3-888e-6da14598f126-kube-api-access-kf7bz\") pod \"f572666f-7359-41a3-888e-6da14598f126\" (UID: \"f572666f-7359-41a3-888e-6da14598f126\") " Oct 06 16:24:52 crc kubenswrapper[4888]: I1006 16:24:52.031771 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f572666f-7359-41a3-888e-6da14598f126-host\") pod \"f572666f-7359-41a3-888e-6da14598f126\" (UID: \"f572666f-7359-41a3-888e-6da14598f126\") " Oct 06 16:24:52 crc kubenswrapper[4888]: I1006 16:24:52.031851 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f572666f-7359-41a3-888e-6da14598f126-host" (OuterVolumeSpecName: "host") pod "f572666f-7359-41a3-888e-6da14598f126" (UID: "f572666f-7359-41a3-888e-6da14598f126"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 16:24:52 crc kubenswrapper[4888]: I1006 16:24:52.032166 4888 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f572666f-7359-41a3-888e-6da14598f126-host\") on node \"crc\" DevicePath \"\"" Oct 06 16:24:52 crc kubenswrapper[4888]: I1006 16:24:52.048346 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f572666f-7359-41a3-888e-6da14598f126-kube-api-access-kf7bz" (OuterVolumeSpecName: "kube-api-access-kf7bz") pod "f572666f-7359-41a3-888e-6da14598f126" (UID: "f572666f-7359-41a3-888e-6da14598f126"). InnerVolumeSpecName "kube-api-access-kf7bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:24:52 crc kubenswrapper[4888]: I1006 16:24:52.134137 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf7bz\" (UniqueName: \"kubernetes.io/projected/f572666f-7359-41a3-888e-6da14598f126-kube-api-access-kf7bz\") on node \"crc\" DevicePath \"\"" Oct 06 16:24:52 crc kubenswrapper[4888]: I1006 16:24:52.864527 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" event={"ID":"f572666f-7359-41a3-888e-6da14598f126","Type":"ContainerDied","Data":"05ff8971479bd50930baab33087d0eae6de4931bb5522b4b90fffaaaf1ce2c74"} Oct 06 16:24:52 crc kubenswrapper[4888]: I1006 16:24:52.864569 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05ff8971479bd50930baab33087d0eae6de4931bb5522b4b90fffaaaf1ce2c74" Oct 06 16:24:52 crc kubenswrapper[4888]: I1006 16:24:52.864621 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-fm9q4" Oct 06 16:24:54 crc kubenswrapper[4888]: I1006 16:24:54.659215 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhv7l/crc-debug-fm9q4"] Oct 06 16:24:54 crc kubenswrapper[4888]: I1006 16:24:54.666214 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qhv7l/crc-debug-fm9q4"] Oct 06 16:24:54 crc kubenswrapper[4888]: I1006 16:24:54.924744 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:24:54 crc kubenswrapper[4888]: E1006 16:24:54.925028 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:24:54 crc kubenswrapper[4888]: I1006 16:24:54.933055 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f572666f-7359-41a3-888e-6da14598f126" path="/var/lib/kubelet/pods/f572666f-7359-41a3-888e-6da14598f126/volumes" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.232872 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhv7l/crc-debug-84vw2"] Oct 06 16:24:56 crc kubenswrapper[4888]: E1006 16:24:56.233441 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f572666f-7359-41a3-888e-6da14598f126" containerName="container-00" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.233452 4888 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f572666f-7359-41a3-888e-6da14598f126" containerName="container-00" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.233660 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="f572666f-7359-41a3-888e-6da14598f126" containerName="container-00" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.234238 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-84vw2" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.236385 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qhv7l"/"default-dockercfg-p5wm2" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.312127 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqst\" (UniqueName: \"kubernetes.io/projected/011689e3-682c-4488-91fd-8b8c863fe845-kube-api-access-hdqst\") pod \"crc-debug-84vw2\" (UID: \"011689e3-682c-4488-91fd-8b8c863fe845\") " pod="openshift-must-gather-qhv7l/crc-debug-84vw2" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.312520 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/011689e3-682c-4488-91fd-8b8c863fe845-host\") pod \"crc-debug-84vw2\" (UID: \"011689e3-682c-4488-91fd-8b8c863fe845\") " pod="openshift-must-gather-qhv7l/crc-debug-84vw2" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.414664 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/011689e3-682c-4488-91fd-8b8c863fe845-host\") pod \"crc-debug-84vw2\" (UID: \"011689e3-682c-4488-91fd-8b8c863fe845\") " pod="openshift-must-gather-qhv7l/crc-debug-84vw2" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.414742 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqst\" (UniqueName: \"kubernetes.io/projected/011689e3-682c-4488-91fd-8b8c863fe845-kube-api-access-hdqst\") pod \"crc-debug-84vw2\" (UID: \"011689e3-682c-4488-91fd-8b8c863fe845\") " pod="openshift-must-gather-qhv7l/crc-debug-84vw2" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.415089 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/011689e3-682c-4488-91fd-8b8c863fe845-host\") pod \"crc-debug-84vw2\" (UID: \"011689e3-682c-4488-91fd-8b8c863fe845\") " pod="openshift-must-gather-qhv7l/crc-debug-84vw2" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.431001 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqst\" (UniqueName: \"kubernetes.io/projected/011689e3-682c-4488-91fd-8b8c863fe845-kube-api-access-hdqst\") pod \"crc-debug-84vw2\" (UID: \"011689e3-682c-4488-91fd-8b8c863fe845\") " pod="openshift-must-gather-qhv7l/crc-debug-84vw2" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.549888 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-84vw2" Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.892752 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/crc-debug-84vw2" event={"ID":"011689e3-682c-4488-91fd-8b8c863fe845","Type":"ContainerStarted","Data":"982304d7eec30a896f2cb265c1ce4e6d3b0cc294be113861e3c985f47427c187"} Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.893272 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/crc-debug-84vw2" event={"ID":"011689e3-682c-4488-91fd-8b8c863fe845","Type":"ContainerStarted","Data":"124986db7e0bb9a95b28b2bf97de0a1130ad0d61a419cf448f61dea15d4aa16b"} Oct 06 16:24:56 crc kubenswrapper[4888]: I1006 16:24:56.919126 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qhv7l/crc-debug-84vw2" podStartSLOduration=0.919107933 podStartE2EDuration="919.107933ms" podCreationTimestamp="2025-10-06 16:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 16:24:56.91616045 +0000 UTC m=+5036.728511168" watchObservedRunningTime="2025-10-06 16:24:56.919107933 +0000 UTC m=+5036.731458651" Oct 06 16:24:57 crc kubenswrapper[4888]: I1006 16:24:57.916846 4888 generic.go:334] "Generic (PLEG): container finished" podID="011689e3-682c-4488-91fd-8b8c863fe845" containerID="982304d7eec30a896f2cb265c1ce4e6d3b0cc294be113861e3c985f47427c187" exitCode=0 Oct 06 16:24:57 crc kubenswrapper[4888]: I1006 16:24:57.917077 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/crc-debug-84vw2" event={"ID":"011689e3-682c-4488-91fd-8b8c863fe845","Type":"ContainerDied","Data":"982304d7eec30a896f2cb265c1ce4e6d3b0cc294be113861e3c985f47427c187"} Oct 06 16:24:59 crc kubenswrapper[4888]: I1006 16:24:59.028282 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-84vw2" Oct 06 16:24:59 crc kubenswrapper[4888]: I1006 16:24:59.062867 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhv7l/crc-debug-84vw2"] Oct 06 16:24:59 crc kubenswrapper[4888]: I1006 16:24:59.068761 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qhv7l/crc-debug-84vw2"] Oct 06 16:24:59 crc kubenswrapper[4888]: I1006 16:24:59.171991 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/011689e3-682c-4488-91fd-8b8c863fe845-host\") pod \"011689e3-682c-4488-91fd-8b8c863fe845\" (UID: \"011689e3-682c-4488-91fd-8b8c863fe845\") " Oct 06 16:24:59 crc kubenswrapper[4888]: I1006 16:24:59.172056 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/011689e3-682c-4488-91fd-8b8c863fe845-host" (OuterVolumeSpecName: "host") pod "011689e3-682c-4488-91fd-8b8c863fe845" (UID: "011689e3-682c-4488-91fd-8b8c863fe845"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 16:24:59 crc kubenswrapper[4888]: I1006 16:24:59.172250 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdqst\" (UniqueName: \"kubernetes.io/projected/011689e3-682c-4488-91fd-8b8c863fe845-kube-api-access-hdqst\") pod \"011689e3-682c-4488-91fd-8b8c863fe845\" (UID: \"011689e3-682c-4488-91fd-8b8c863fe845\") " Oct 06 16:24:59 crc kubenswrapper[4888]: I1006 16:24:59.173788 4888 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/011689e3-682c-4488-91fd-8b8c863fe845-host\") on node \"crc\" DevicePath \"\"" Oct 06 16:24:59 crc kubenswrapper[4888]: I1006 16:24:59.177375 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011689e3-682c-4488-91fd-8b8c863fe845-kube-api-access-hdqst" (OuterVolumeSpecName: "kube-api-access-hdqst") pod "011689e3-682c-4488-91fd-8b8c863fe845" (UID: "011689e3-682c-4488-91fd-8b8c863fe845"). InnerVolumeSpecName "kube-api-access-hdqst". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:24:59 crc kubenswrapper[4888]: I1006 16:24:59.275332 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdqst\" (UniqueName: \"kubernetes.io/projected/011689e3-682c-4488-91fd-8b8c863fe845-kube-api-access-hdqst\") on node \"crc\" DevicePath \"\"" Oct 06 16:24:59 crc kubenswrapper[4888]: I1006 16:24:59.933296 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="124986db7e0bb9a95b28b2bf97de0a1130ad0d61a419cf448f61dea15d4aa16b" Oct 06 16:24:59 crc kubenswrapper[4888]: I1006 16:24:59.933352 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/crc-debug-84vw2" Oct 06 16:25:00 crc kubenswrapper[4888]: I1006 16:25:00.933456 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="011689e3-682c-4488-91fd-8b8c863fe845" path="/var/lib/kubelet/pods/011689e3-682c-4488-91fd-8b8c863fe845/volumes" Oct 06 16:25:06 crc kubenswrapper[4888]: I1006 16:25:06.921372 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:25:06 crc kubenswrapper[4888]: E1006 16:25:06.921985 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:25:13 crc kubenswrapper[4888]: I1006 16:25:13.616898 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7_f5eae805-9f70-47d7-b612-6ee46380d5a4/util/0.log" Oct 06 16:25:13 crc kubenswrapper[4888]: I1006 16:25:13.846504 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7_f5eae805-9f70-47d7-b612-6ee46380d5a4/util/0.log" Oct 06 16:25:13 crc kubenswrapper[4888]: I1006 16:25:13.897046 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7_f5eae805-9f70-47d7-b612-6ee46380d5a4/pull/0.log" Oct 06 16:25:13 crc 
kubenswrapper[4888]: I1006 16:25:13.923132 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7_f5eae805-9f70-47d7-b612-6ee46380d5a4/pull/0.log" Oct 06 16:25:13 crc kubenswrapper[4888]: I1006 16:25:13.993797 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7_f5eae805-9f70-47d7-b612-6ee46380d5a4/util/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.064582 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7_f5eae805-9f70-47d7-b612-6ee46380d5a4/pull/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.120385 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0fb7be3d855955d7072cc0fb22ee445dab10bffba7261b457a5d424b63rxpd7_f5eae805-9f70-47d7-b612-6ee46380d5a4/extract/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.178464 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-kzz5z_a3e1786a-e54d-4a41-a974-fab79300a4b9/kube-rbac-proxy/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.279114 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-58c4cd55f4-kzz5z_a3e1786a-e54d-4a41-a974-fab79300a4b9/manager/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.353657 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7d4d4f8d-xwmzg_4871270f-8f10-42b8-880a-7ff6ab0d1476/kube-rbac-proxy/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.425749 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7d4d4f8d-xwmzg_4871270f-8f10-42b8-880a-7ff6ab0d1476/manager/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.615896 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-wdknr_919edfbf-4b21-44cb-b821-3d3294a2beb1/kube-rbac-proxy/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.628395 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-wdknr_919edfbf-4b21-44cb-b821-3d3294a2beb1/manager/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.693332 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-6tfdd_18e5b74c-d61b-4916-a059-5daa7e2b6277/kube-rbac-proxy/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.875475 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-nn686_c951d569-cbb5-4525-b66d-07f2473db97a/kube-rbac-proxy/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.878242 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5dc44df7d5-6tfdd_18e5b74c-d61b-4916-a059-5daa7e2b6277/manager/0.log" Oct 06 16:25:14 crc kubenswrapper[4888]: I1006 16:25:14.918875 4888 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-54b4974c45-nn686_c951d569-cbb5-4525-b66d-07f2473db97a/manager/0.log" Oct 06 16:25:15 crc kubenswrapper[4888]: I1006 16:25:15.129645 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-gw6sx_ba1d4452-d22b-4724-93eb-bf70500f2040/kube-rbac-proxy/0.log" Oct 06 16:25:15 crc kubenswrapper[4888]: I1006 16:25:15.175494 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-76d5b87f47-gw6sx_ba1d4452-d22b-4724-93eb-bf70500f2040/manager/0.log" Oct 06 16:25:15 crc kubenswrapper[4888]: I1006 16:25:15.255558 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-nbssz_7d3c45dc-4628-4b73-b123-dda0b0cb4d72/kube-rbac-proxy/0.log" Oct 06 16:25:15 crc kubenswrapper[4888]: I1006 16:25:15.439107 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-jxh84_c7b475c9-9590-41e5-9bd4-d7f9fb04958c/kube-rbac-proxy/0.log" Oct 06 16:25:15 crc kubenswrapper[4888]: I1006 16:25:15.452517 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-658588b8c9-nbssz_7d3c45dc-4628-4b73-b123-dda0b0cb4d72/manager/0.log" Oct 06 16:25:15 crc kubenswrapper[4888]: I1006 16:25:15.487959 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-649675d675-jxh84_c7b475c9-9590-41e5-9bd4-d7f9fb04958c/manager/0.log" Oct 06 16:25:15 crc kubenswrapper[4888]: I1006 16:25:15.686701 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-4p87x_614bf51f-2fa7-48ae-a9e2-2f371656f326/kube-rbac-proxy/0.log" Oct 06 16:25:15 crc kubenswrapper[4888]: I1006 16:25:15.698861 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b5ccf6d9c-4p87x_614bf51f-2fa7-48ae-a9e2-2f371656f326/manager/0.log" Oct 06 16:25:15 crc kubenswrapper[4888]: I1006 16:25:15.849071 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-mmfzd_5a1c3c1c-2a06-49a0-9189-acbcdd0053c6/manager/0.log" Oct 06 16:25:15 crc kubenswrapper[4888]: I1006 16:25:15.886162 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-65d89cfd9f-mmfzd_5a1c3c1c-2a06-49a0-9189-acbcdd0053c6/kube-rbac-proxy/0.log" Oct 06 16:25:15 crc kubenswrapper[4888]: I1006 16:25:15.954422 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-4m69r_0ab9d525-93a8-4920-a6e4-e70dfd942ce3/kube-rbac-proxy/0.log" Oct 06 16:25:16 crc kubenswrapper[4888]: I1006 16:25:16.110787 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6cd6d7bdf5-4m69r_0ab9d525-93a8-4920-a6e4-e70dfd942ce3/manager/0.log" Oct 06 16:25:16 crc kubenswrapper[4888]: I1006 16:25:16.214729 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-wwg8m_e3c80163-0837-4095-96c9-2d51ac49b7c4/kube-rbac-proxy/0.log" Oct 06 16:25:16 crc kubenswrapper[4888]: I1006 16:25:16.233511 4888 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-8d984cc4d-wwg8m_e3c80163-0837-4095-96c9-2d51ac49b7c4/manager/0.log" Oct 06 16:25:16 crc kubenswrapper[4888]: I1006 16:25:16.415135 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-z5nwh_b725fd76-028e-4dc0-bbc5-8d18cf1e667b/kube-rbac-proxy/0.log" Oct 06 16:25:16 crc kubenswrapper[4888]: I1006 16:25:16.630817 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-pdwl5_b798c5fc-8252-4301-b5f2-6d47107266c9/kube-rbac-proxy/0.log" Oct 06 16:25:16 crc kubenswrapper[4888]: I1006 16:25:16.633613 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c7fc454ff-z5nwh_b725fd76-028e-4dc0-bbc5-8d18cf1e667b/manager/0.log" Oct 06 16:25:16 crc kubenswrapper[4888]: I1006 16:25:16.662096 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7468f855d8-pdwl5_b798c5fc-8252-4301-b5f2-6d47107266c9/manager/0.log" Oct 06 16:25:16 crc kubenswrapper[4888]: I1006 16:25:16.870299 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4_216a208a-e34a-4796-a72d-79fb0dba1491/manager/0.log" Oct 06 16:25:16 crc kubenswrapper[4888]: I1006 16:25:16.891583 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5dfbbd665c9pmr4_216a208a-e34a-4796-a72d-79fb0dba1491/kube-rbac-proxy/0.log" Oct 06 16:25:16 crc kubenswrapper[4888]: I1006 16:25:16.958700 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-847bc59d9d-djwkk_3ad05938-27d0-4006-a663-5f3ae2526053/kube-rbac-proxy/0.log" Oct 06 16:25:17 crc kubenswrapper[4888]: I1006 16:25:17.113304 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6bbd86684c-xs78r_cc3cce66-3f1e-4348-8927-9a809f383102/kube-rbac-proxy/0.log" Oct 06 16:25:17 crc kubenswrapper[4888]: I1006 16:25:17.253195 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6bbd86684c-xs78r_cc3cce66-3f1e-4348-8927-9a809f383102/operator/0.log" Oct 06 16:25:17 crc kubenswrapper[4888]: I1006 16:25:17.394338 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-s88cv_2ec65a14-9b20-4a08-853e-9beb385d3883/registry-server/0.log" Oct 06 16:25:17 crc kubenswrapper[4888]: I1006 16:25:17.504086 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6d8b6f9b9-jsk2z_23fa3eba-f10e-42b2-bc39-2df07d518a0e/kube-rbac-proxy/0.log" Oct 06 16:25:17 crc kubenswrapper[4888]: I1006 16:25:17.585827 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6d8b6f9b9-jsk2z_23fa3eba-f10e-42b2-bc39-2df07d518a0e/manager/0.log" Oct 06 16:25:17 crc kubenswrapper[4888]: I1006 16:25:17.801233 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-m5c7d_edac5ad0-266b-449a-a2a8-95eb9afb0348/manager/0.log" Oct 06 16:25:17 crc kubenswrapper[4888]: I1006 
16:25:17.841719 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-54689d9f88-m5c7d_edac5ad0-266b-449a-a2a8-95eb9afb0348/kube-rbac-proxy/0.log" Oct 06 16:25:17 crc kubenswrapper[4888]: I1006 16:25:17.851137 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-847bc59d9d-djwkk_3ad05938-27d0-4006-a663-5f3ae2526053/manager/0.log" Oct 06 16:25:18 crc kubenswrapper[4888]: I1006 16:25:18.012834 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-2hsqr_a4a728bc-e48c-43b7-b143-aded2946ee76/operator/0.log" Oct 06 16:25:18 crc kubenswrapper[4888]: I1006 16:25:18.026621 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-qtk96_470e257a-9b82-4ffc-88a2-974afe3d6abb/kube-rbac-proxy/0.log" Oct 06 16:25:18 crc kubenswrapper[4888]: I1006 16:25:18.132787 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-qtk96_470e257a-9b82-4ffc-88a2-974afe3d6abb/manager/0.log" Oct 06 16:25:18 crc kubenswrapper[4888]: I1006 16:25:18.235422 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-zdzkx_b3acc46c-a819-4e19-8534-34855edcdbaa/kube-rbac-proxy/0.log" Oct 06 16:25:18 crc kubenswrapper[4888]: I1006 16:25:18.338263 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5d4d74dd89-zdzkx_b3acc46c-a819-4e19-8534-34855edcdbaa/manager/0.log" Oct 06 16:25:18 crc kubenswrapper[4888]: I1006 16:25:18.367833 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-c6fgp_99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2/manager/0.log" Oct 06 16:25:18 crc kubenswrapper[4888]: I1006 16:25:18.373477 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-c6fgp_99965d92-2fb3-4bf4-8ccf-ab574aa1a4c2/kube-rbac-proxy/0.log" Oct 06 16:25:18 crc kubenswrapper[4888]: I1006 16:25:18.505484 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6cbc6dd547-vbgpl_cc6afaf6-e217-4d09-8cdf-d0ad4dd79db9/kube-rbac-proxy/0.log" Oct 06 16:25:18 crc kubenswrapper[4888]: I1006 16:25:18.533007 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6cbc6dd547-vbgpl_cc6afaf6-e217-4d09-8cdf-d0ad4dd79db9/manager/0.log" Oct 06 16:25:21 crc kubenswrapper[4888]: I1006 16:25:21.921040 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:25:21 crc kubenswrapper[4888]: E1006 16:25:21.923283 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:25:33 crc kubenswrapper[4888]: I1006 16:25:33.921270 4888 scope.go:117] "RemoveContainer" 
containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:25:33 crc kubenswrapper[4888]: E1006 16:25:33.922153 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:25:34 crc kubenswrapper[4888]: I1006 16:25:34.195340 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f792m_5686c4e3-454d-4282-88d5-326ee00e2e2a/control-plane-machine-set-operator/0.log" Oct 06 16:25:34 crc kubenswrapper[4888]: I1006 16:25:34.321051 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4r4q2_9ef05c84-15d5-413d-baee-70e7ae0e2a8f/kube-rbac-proxy/0.log" Oct 06 16:25:34 crc kubenswrapper[4888]: I1006 16:25:34.399202 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4r4q2_9ef05c84-15d5-413d-baee-70e7ae0e2a8f/machine-api-operator/0.log" Oct 06 16:25:45 crc kubenswrapper[4888]: I1006 16:25:45.921284 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:25:45 crc kubenswrapper[4888]: E1006 16:25:45.922013 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:25:45 crc kubenswrapper[4888]: I1006 16:25:45.973543 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-7fvrb_23c3654d-b36a-455a-a78c-3279d3ba9d23/cert-manager-controller/0.log" Oct 06 16:25:46 crc kubenswrapper[4888]: I1006 16:25:46.182580 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-pd9dj_a76a2cfe-bd91-4f96-86b8-9449ebec89d9/cert-manager-webhook/0.log" Oct 06 16:25:46 crc kubenswrapper[4888]: I1006 16:25:46.230475 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-zz9pz_bd69d2c4-f9c6-4688-bd53-1ae4cbf36175/cert-manager-cainjector/0.log" Oct 06 16:25:58 crc kubenswrapper[4888]: I1006 16:25:58.442226 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-vr9q4_a8ba4133-701b-48a4-a066-5ef4e96ca10d/nmstate-console-plugin/0.log" Oct 06 16:25:58 crc kubenswrapper[4888]: I1006 16:25:58.676703 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-f7sj6_559ac274-e135-4305-8659-c0d3d3e0a832/nmstate-metrics/0.log" Oct 06 16:25:58 crc kubenswrapper[4888]: I1006 16:25:58.696924 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8j27m_f5de385b-9a6a-4dcd-819b-35500a617b29/nmstate-handler/0.log" Oct 06 16:25:58 crc kubenswrapper[4888]: I1006 16:25:58.744035 4888 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-f7sj6_559ac274-e135-4305-8659-c0d3d3e0a832/kube-rbac-proxy/0.log" Oct 06 16:25:58 crc kubenswrapper[4888]: I1006 16:25:58.922374 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:25:58 crc kubenswrapper[4888]: E1006 16:25:58.922573 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:25:59 crc kubenswrapper[4888]: I1006 16:25:59.141164 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-pv582_c8df3027-9c1a-48ab-bfd6-5a6cb0753f95/nmstate-webhook/0.log" Oct 06 16:25:59 crc kubenswrapper[4888]: I1006 16:25:59.207155 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-twx77_6156a5b3-4323-433b-bde7-ad0bebee7659/nmstate-operator/0.log" Oct 06 16:26:12 crc kubenswrapper[4888]: I1006 16:26:12.921778 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:26:12 crc kubenswrapper[4888]: E1006 16:26:12.922658 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:26:14 crc kubenswrapper[4888]: I1006 16:26:14.500133 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-z57b5_a077b76d-9a5e-482a-8c03-efd3a93f1c62/kube-rbac-proxy/0.log" Oct 06 16:26:14 crc kubenswrapper[4888]: I1006 16:26:14.564849 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-z57b5_a077b76d-9a5e-482a-8c03-efd3a93f1c62/controller/0.log" Oct 06 16:26:14 crc kubenswrapper[4888]: I1006 16:26:14.722635 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-frr-files/0.log" Oct 06 16:26:14 crc kubenswrapper[4888]: I1006 16:26:14.896495 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-frr-files/0.log" Oct 06 16:26:14 crc kubenswrapper[4888]: I1006 16:26:14.912790 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-metrics/0.log" Oct 06 16:26:14 crc kubenswrapper[4888]: I1006 16:26:14.944644 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-reloader/0.log" Oct 06 16:26:14 crc kubenswrapper[4888]: I1006 16:26:14.958039 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-reloader/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 
16:26:15.174785 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-metrics/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 16:26:15.199486 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-reloader/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 16:26:15.215357 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-frr-files/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 16:26:15.223492 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-metrics/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 16:26:15.362215 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-frr-files/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 16:26:15.363353 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-reloader/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 16:26:15.401822 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/controller/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 16:26:15.411589 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/cp-metrics/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 16:26:15.576576 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/frr-metrics/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 16:26:15.617646 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/kube-rbac-proxy/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 16:26:15.657816 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/kube-rbac-proxy-frr/0.log" Oct 06 16:26:15 crc kubenswrapper[4888]: I1006 16:26:15.862290 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/reloader/0.log" Oct 06 16:26:16 crc kubenswrapper[4888]: I1006 16:26:16.018362 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-t22dc_3a700faf-c900-4a48-814b-568b4eb5b60c/frr-k8s-webhook-server/0.log" Oct 06 16:26:16 crc kubenswrapper[4888]: I1006 16:26:16.297986 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8599c478db-fs7c6_5f42e180-91fb-42d0-bf64-66c83c63001b/manager/0.log" Oct 06 16:26:16 crc kubenswrapper[4888]: I1006 16:26:16.528840 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m9bpz_e8af0a6f-bf6e-4822-827f-6e40bf4c9f15/frr/0.log" Oct 06 16:26:16 crc kubenswrapper[4888]: I1006 16:26:16.894458 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xd85f_12baee70-26bd-4484-92ec-a74a01b41356/kube-rbac-proxy/0.log" Oct 06 16:26:16 crc kubenswrapper[4888]: I1006 16:26:16.985876 4888 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8bb6f8b6-sfr26_ce47b00a-e949-469d-bb0c-618a40600f55/webhook-server/0.log" Oct 06 16:26:17 crc kubenswrapper[4888]: I1006 16:26:17.292543 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-xd85f_12baee70-26bd-4484-92ec-a74a01b41356/speaker/0.log" Oct 06 16:26:23 crc kubenswrapper[4888]: I1006 16:26:23.922172 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:26:23 crc kubenswrapper[4888]: E1006 16:26:23.923380 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:26:30 crc kubenswrapper[4888]: I1006 16:26:30.676612 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr_613a43a7-c01b-481f-bd3f-387d90df61ec/util/0.log" Oct 06 16:26:30 crc kubenswrapper[4888]: I1006 16:26:30.800508 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr_613a43a7-c01b-481f-bd3f-387d90df61ec/util/0.log" Oct 06 16:26:30 crc kubenswrapper[4888]: I1006 16:26:30.842771 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr_613a43a7-c01b-481f-bd3f-387d90df61ec/pull/0.log" Oct 06 16:26:30 crc kubenswrapper[4888]: I1006 16:26:30.880273 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr_613a43a7-c01b-481f-bd3f-387d90df61ec/pull/0.log" Oct 06 16:26:31 crc kubenswrapper[4888]: I1006 16:26:31.013660 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr_613a43a7-c01b-481f-bd3f-387d90df61ec/util/0.log" Oct 06 16:26:31 crc kubenswrapper[4888]: I1006 16:26:31.064090 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr_613a43a7-c01b-481f-bd3f-387d90df61ec/pull/0.log" Oct 06 16:26:31 crc kubenswrapper[4888]: I1006 16:26:31.072593 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d27k8fr_613a43a7-c01b-481f-bd3f-387d90df61ec/extract/0.log" Oct 06 16:26:31 crc kubenswrapper[4888]: I1006 16:26:31.200665 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j4b2n_874bd15d-af29-4293-a17d-27c424806052/extract-utilities/0.log" Oct 06 16:26:31 crc kubenswrapper[4888]: I1006 16:26:31.423677 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j4b2n_874bd15d-af29-4293-a17d-27c424806052/extract-content/0.log" Oct 06 16:26:31 crc kubenswrapper[4888]: I1006 16:26:31.430865 4888 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-j4b2n_874bd15d-af29-4293-a17d-27c424806052/extract-utilities/0.log" Oct 06 16:26:31 crc kubenswrapper[4888]: I1006 16:26:31.437026 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j4b2n_874bd15d-af29-4293-a17d-27c424806052/extract-content/0.log" Oct 06 16:26:31 crc kubenswrapper[4888]: I1006 16:26:31.541555 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j4b2n_874bd15d-af29-4293-a17d-27c424806052/extract-utilities/0.log" Oct 06 16:26:31 crc kubenswrapper[4888]: I1006 16:26:31.608743 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j4b2n_874bd15d-af29-4293-a17d-27c424806052/extract-content/0.log" Oct 06 16:26:31 crc kubenswrapper[4888]: I1006 16:26:31.773854 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hlr5g_02095982-7d9a-470d-a5e4-ddec41a38a36/extract-utilities/0.log" Oct 06 16:26:32 crc kubenswrapper[4888]: I1006 16:26:32.076985 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hlr5g_02095982-7d9a-470d-a5e4-ddec41a38a36/extract-content/0.log" Oct 06 16:26:32 crc kubenswrapper[4888]: I1006 16:26:32.137517 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hlr5g_02095982-7d9a-470d-a5e4-ddec41a38a36/extract-utilities/0.log" Oct 06 16:26:32 crc kubenswrapper[4888]: I1006 16:26:32.216892 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hlr5g_02095982-7d9a-470d-a5e4-ddec41a38a36/extract-content/0.log" Oct 06 16:26:32 crc kubenswrapper[4888]: I1006 16:26:32.241663 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j4b2n_874bd15d-af29-4293-a17d-27c424806052/registry-server/0.log" Oct 06 16:26:32 crc kubenswrapper[4888]: I1006 16:26:32.363149 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hlr5g_02095982-7d9a-470d-a5e4-ddec41a38a36/extract-content/0.log" Oct 06 16:26:32 crc kubenswrapper[4888]: I1006 16:26:32.387538 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hlr5g_02095982-7d9a-470d-a5e4-ddec41a38a36/extract-utilities/0.log" Oct 06 16:26:32 crc kubenswrapper[4888]: I1006 16:26:32.643381 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q_27fd59b4-ff18-477e-b242-b7b60574de55/util/0.log" Oct 06 16:26:32 crc kubenswrapper[4888]: I1006 16:26:32.963275 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q_27fd59b4-ff18-477e-b242-b7b60574de55/util/0.log" Oct 06 16:26:32 crc kubenswrapper[4888]: I1006 16:26:32.980842 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q_27fd59b4-ff18-477e-b242-b7b60574de55/pull/0.log" Oct 06 16:26:32 crc kubenswrapper[4888]: I1006 16:26:32.996828 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q_27fd59b4-ff18-477e-b242-b7b60574de55/pull/0.log" Oct 06 16:26:33 
crc kubenswrapper[4888]: I1006 16:26:33.094087 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hlr5g_02095982-7d9a-470d-a5e4-ddec41a38a36/registry-server/0.log" Oct 06 16:26:33 crc kubenswrapper[4888]: I1006 16:26:33.249290 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q_27fd59b4-ff18-477e-b242-b7b60574de55/util/0.log" Oct 06 16:26:33 crc kubenswrapper[4888]: I1006 16:26:33.279877 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q_27fd59b4-ff18-477e-b242-b7b60574de55/pull/0.log" Oct 06 16:26:33 crc kubenswrapper[4888]: I1006 16:26:33.316087 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835czvk2q_27fd59b4-ff18-477e-b242-b7b60574de55/extract/0.log" Oct 06 16:26:33 crc kubenswrapper[4888]: I1006 16:26:33.511168 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fsj8b_7c70fbc7-53ed-416b-a3f3-754b465569d7/marketplace-operator/0.log" Oct 06 16:26:33 crc kubenswrapper[4888]: I1006 16:26:33.623398 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pxpqx_82adbb31-e074-49d1-97fa-123bde18a0b8/extract-utilities/0.log" Oct 06 16:26:33 crc kubenswrapper[4888]: I1006 16:26:33.795184 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pxpqx_82adbb31-e074-49d1-97fa-123bde18a0b8/extract-utilities/0.log" Oct 06 16:26:33 crc kubenswrapper[4888]: I1006 16:26:33.825024 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pxpqx_82adbb31-e074-49d1-97fa-123bde18a0b8/extract-content/0.log" Oct 06 16:26:33 crc kubenswrapper[4888]: I1006 16:26:33.833472 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pxpqx_82adbb31-e074-49d1-97fa-123bde18a0b8/extract-content/0.log" Oct 06 16:26:33 crc kubenswrapper[4888]: I1006 16:26:33.966050 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pxpqx_82adbb31-e074-49d1-97fa-123bde18a0b8/extract-utilities/0.log" Oct 06 16:26:34 crc kubenswrapper[4888]: I1006 16:26:34.026725 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pxpqx_82adbb31-e074-49d1-97fa-123bde18a0b8/extract-content/0.log" Oct 06 16:26:34 crc kubenswrapper[4888]: I1006 16:26:34.141261 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pxpqx_82adbb31-e074-49d1-97fa-123bde18a0b8/registry-server/0.log" Oct 06 16:26:34 crc kubenswrapper[4888]: I1006 16:26:34.243131 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-26d4p_05a6df39-9337-4af7-84cb-bdf2ab5fa1e0/extract-utilities/0.log" Oct 06 16:26:34 crc kubenswrapper[4888]: I1006 16:26:34.354060 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-26d4p_05a6df39-9337-4af7-84cb-bdf2ab5fa1e0/extract-utilities/0.log" Oct 06 16:26:34 crc kubenswrapper[4888]: I1006 16:26:34.394293 4888 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-26d4p_05a6df39-9337-4af7-84cb-bdf2ab5fa1e0/extract-content/0.log" Oct 06 16:26:34 crc kubenswrapper[4888]: I1006 16:26:34.399204 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-26d4p_05a6df39-9337-4af7-84cb-bdf2ab5fa1e0/extract-content/0.log" Oct 06 16:26:34 crc kubenswrapper[4888]: I1006 16:26:34.566383 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-26d4p_05a6df39-9337-4af7-84cb-bdf2ab5fa1e0/extract-utilities/0.log" Oct 06 16:26:34 crc kubenswrapper[4888]: I1006 16:26:34.613644 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-26d4p_05a6df39-9337-4af7-84cb-bdf2ab5fa1e0/extract-content/0.log" Oct 06 16:26:35 crc kubenswrapper[4888]: I1006 16:26:35.114200 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-26d4p_05a6df39-9337-4af7-84cb-bdf2ab5fa1e0/registry-server/0.log" Oct 06 16:26:37 crc kubenswrapper[4888]: I1006 16:26:37.922070 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:26:37 crc kubenswrapper[4888]: E1006 16:26:37.922561 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:26:48 crc kubenswrapper[4888]: I1006 16:26:48.922100 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:26:48 crc kubenswrapper[4888]: E1006 16:26:48.922980 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:27:00 crc kubenswrapper[4888]: I1006 16:27:00.926745 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:27:00 crc kubenswrapper[4888]: E1006 16:27:00.927598 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:27:15 crc kubenswrapper[4888]: I1006 16:27:15.922203 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:27:15 crc kubenswrapper[4888]: E1006 16:27:15.923365 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:27:30 crc kubenswrapper[4888]: I1006 16:27:30.928645 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:27:30 crc kubenswrapper[4888]: E1006 16:27:30.929570 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:27:41 crc kubenswrapper[4888]: I1006 16:27:41.921550 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:27:41 crc kubenswrapper[4888]: E1006 16:27:41.922352 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:27:55 crc kubenswrapper[4888]: I1006 16:27:55.921960 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:27:55 crc kubenswrapper[4888]: E1006 16:27:55.922718 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:28:08 crc kubenswrapper[4888]: I1006 16:28:08.923101 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:28:08 crc kubenswrapper[4888]: E1006 16:28:08.924171 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:28:21 crc kubenswrapper[4888]: I1006 16:28:21.922445 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:28:21 crc kubenswrapper[4888]: E1006 16:28:21.923227 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" 
podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:28:35 crc kubenswrapper[4888]: I1006 16:28:35.921217 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:28:35 crc kubenswrapper[4888]: E1006 16:28:35.922121 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:28:37 crc kubenswrapper[4888]: I1006 16:28:37.837354 4888 generic.go:334] "Generic (PLEG): container finished" podID="645d487f-92ae-43a8-a9f2-768977b9eea7" containerID="7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886" exitCode=0 Oct 06 16:28:37 crc kubenswrapper[4888]: I1006 16:28:37.837440 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhv7l/must-gather-kz7f8" event={"ID":"645d487f-92ae-43a8-a9f2-768977b9eea7","Type":"ContainerDied","Data":"7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886"} Oct 06 16:28:37 crc kubenswrapper[4888]: I1006 16:28:37.838431 4888 scope.go:117] "RemoveContainer" containerID="7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886" Oct 06 16:28:38 crc kubenswrapper[4888]: I1006 16:28:38.164891 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qhv7l_must-gather-kz7f8_645d487f-92ae-43a8-a9f2-768977b9eea7/gather/0.log" Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.307166 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhv7l/must-gather-kz7f8"] Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.307789 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qhv7l/must-gather-kz7f8" podUID="645d487f-92ae-43a8-a9f2-768977b9eea7" containerName="copy" containerID="cri-o://57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149" gracePeriod=2 Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.318506 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qhv7l/must-gather-kz7f8"] Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.853051 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qhv7l_must-gather-kz7f8_645d487f-92ae-43a8-a9f2-768977b9eea7/copy/0.log" Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.853718 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhv7l/must-gather-kz7f8" Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.927666 4888 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qhv7l_must-gather-kz7f8_645d487f-92ae-43a8-a9f2-768977b9eea7/copy/0.log" Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.928277 4888 generic.go:334] "Generic (PLEG): container finished" podID="645d487f-92ae-43a8-a9f2-768977b9eea7" containerID="57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149" exitCode=143 Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.928336 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhv7l/must-gather-kz7f8" Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.936784 4888 scope.go:117] "RemoveContainer" containerID="57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149" Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.969009 4888 scope.go:117] "RemoveContainer" containerID="7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886" Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.988636 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/645d487f-92ae-43a8-a9f2-768977b9eea7-must-gather-output\") pod \"645d487f-92ae-43a8-a9f2-768977b9eea7\" (UID: \"645d487f-92ae-43a8-a9f2-768977b9eea7\") " Oct 06 16:28:46 crc kubenswrapper[4888]: I1006 16:28:46.988921 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvjts\" (UniqueName: \"kubernetes.io/projected/645d487f-92ae-43a8-a9f2-768977b9eea7-kube-api-access-gvjts\") pod \"645d487f-92ae-43a8-a9f2-768977b9eea7\" (UID: \"645d487f-92ae-43a8-a9f2-768977b9eea7\") " Oct 06 16:28:47 crc kubenswrapper[4888]: I1006 16:28:47.009225 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645d487f-92ae-43a8-a9f2-768977b9eea7-kube-api-access-gvjts" (OuterVolumeSpecName: "kube-api-access-gvjts") pod "645d487f-92ae-43a8-a9f2-768977b9eea7" (UID: "645d487f-92ae-43a8-a9f2-768977b9eea7"). InnerVolumeSpecName "kube-api-access-gvjts". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:28:47 crc kubenswrapper[4888]: I1006 16:28:47.020157 4888 scope.go:117] "RemoveContainer" containerID="57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149" Oct 06 16:28:47 crc kubenswrapper[4888]: E1006 16:28:47.020776 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149\": container with ID starting with 57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149 not found: ID does not exist" containerID="57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149" Oct 06 16:28:47 crc kubenswrapper[4888]: I1006 16:28:47.020855 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149"} err="failed to get container status \"57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149\": rpc error: code = NotFound desc = could not find container \"57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149\": container with ID starting with 57620b70ea1373ffd61b5649bdb206028ebc142671421ba6251737e1e6b70149 not found: ID does not exist" Oct 06 16:28:47 crc kubenswrapper[4888]: I1006 16:28:47.020876 4888 scope.go:117] "RemoveContainer" containerID="7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886" Oct 06 16:28:47 crc kubenswrapper[4888]: E1006 16:28:47.021154 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886\": container with ID starting with 7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886 not found: ID does not exist" containerID="7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886" Oct 06 16:28:47 crc kubenswrapper[4888]: 
I1006 16:28:47.021206 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886"} err="failed to get container status \"7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886\": rpc error: code = NotFound desc = could not find container \"7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886\": container with ID starting with 7d9b5575883221722c77d1a8ac7146b4d4a678ab8bd0dd0b919a44b8aa8e6886 not found: ID does not exist" Oct 06 16:28:47 crc kubenswrapper[4888]: I1006 16:28:47.095899 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvjts\" (UniqueName: \"kubernetes.io/projected/645d487f-92ae-43a8-a9f2-768977b9eea7-kube-api-access-gvjts\") on node \"crc\" DevicePath \"\"" Oct 06 16:28:47 crc kubenswrapper[4888]: I1006 16:28:47.180098 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/645d487f-92ae-43a8-a9f2-768977b9eea7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "645d487f-92ae-43a8-a9f2-768977b9eea7" (UID: "645d487f-92ae-43a8-a9f2-768977b9eea7"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:28:47 crc kubenswrapper[4888]: I1006 16:28:47.198011 4888 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/645d487f-92ae-43a8-a9f2-768977b9eea7-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 06 16:28:48 crc kubenswrapper[4888]: I1006 16:28:48.939286 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="645d487f-92ae-43a8-a9f2-768977b9eea7" path="/var/lib/kubelet/pods/645d487f-92ae-43a8-a9f2-768977b9eea7/volumes" Oct 06 16:28:50 crc kubenswrapper[4888]: I1006 16:28:50.931893 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:28:50 crc kubenswrapper[4888]: E1006 16:28:50.932666 4888 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-spjkk_openshift-machine-config-operator(a145d9af-9431-4196-bd66-a095e39bf3ca)\"" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" Oct 06 16:29:05 crc kubenswrapper[4888]: I1006 16:29:05.921350 4888 scope.go:117] "RemoveContainer" containerID="c7f38f12ccc1ed791e2c4ee18e730f99b74864091166744cec4561145a2fac3c" Oct 06 16:29:07 crc kubenswrapper[4888]: I1006 16:29:07.140222 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" event={"ID":"a145d9af-9431-4196-bd66-a095e39bf3ca","Type":"ContainerStarted","Data":"89ee752bb1f1acaf0c99f2006ba2006727038ce4faaba63ba097e1a6e7d70c8a"} Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.154662 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h2qw9"] Oct 06 16:29:57 crc kubenswrapper[4888]: E1006 16:29:57.155633 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011689e3-682c-4488-91fd-8b8c863fe845" containerName="container-00" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.155647 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="011689e3-682c-4488-91fd-8b8c863fe845" 
containerName="container-00" Oct 06 16:29:57 crc kubenswrapper[4888]: E1006 16:29:57.155693 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645d487f-92ae-43a8-a9f2-768977b9eea7" containerName="gather" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.155702 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="645d487f-92ae-43a8-a9f2-768977b9eea7" containerName="gather" Oct 06 16:29:57 crc kubenswrapper[4888]: E1006 16:29:57.155724 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645d487f-92ae-43a8-a9f2-768977b9eea7" containerName="copy" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.155733 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="645d487f-92ae-43a8-a9f2-768977b9eea7" containerName="copy" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.155968 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="645d487f-92ae-43a8-a9f2-768977b9eea7" containerName="gather" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.155990 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="011689e3-682c-4488-91fd-8b8c863fe845" containerName="container-00" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.156001 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="645d487f-92ae-43a8-a9f2-768977b9eea7" containerName="copy" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.161196 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.193312 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h2qw9"] Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.292901 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-catalog-content\") pod \"community-operators-h2qw9\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.293217 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn6z5\" (UniqueName: \"kubernetes.io/projected/88bb52dd-c90e-4075-a71c-7e7d74d8287a-kube-api-access-gn6z5\") pod \"community-operators-h2qw9\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.293490 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-utilities\") pod \"community-operators-h2qw9\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.395090 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-utilities\") pod \"community-operators-h2qw9\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.395261 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-catalog-content\") pod \"community-operators-h2qw9\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.395310 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn6z5\" (UniqueName: \"kubernetes.io/projected/88bb52dd-c90e-4075-a71c-7e7d74d8287a-kube-api-access-gn6z5\") pod \"community-operators-h2qw9\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.395606 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-utilities\") pod \"community-operators-h2qw9\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.395731 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-catalog-content\") pod \"community-operators-h2qw9\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.418266 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn6z5\" (UniqueName: \"kubernetes.io/projected/88bb52dd-c90e-4075-a71c-7e7d74d8287a-kube-api-access-gn6z5\") pod \"community-operators-h2qw9\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:29:57 crc kubenswrapper[4888]: I1006 16:29:57.482231 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:29:58 crc kubenswrapper[4888]: I1006 16:29:58.095149 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h2qw9"] Oct 06 16:29:58 crc kubenswrapper[4888]: I1006 16:29:58.756841 4888 generic.go:334] "Generic (PLEG): container finished" podID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" containerID="6a846560c367952a05afba27196ee235c2fab2e65d5a9eb41c5dcb14854b0f64" exitCode=0 Oct 06 16:29:58 crc kubenswrapper[4888]: I1006 16:29:58.756948 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2qw9" event={"ID":"88bb52dd-c90e-4075-a71c-7e7d74d8287a","Type":"ContainerDied","Data":"6a846560c367952a05afba27196ee235c2fab2e65d5a9eb41c5dcb14854b0f64"} Oct 06 16:29:58 crc kubenswrapper[4888]: I1006 16:29:58.757199 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2qw9" event={"ID":"88bb52dd-c90e-4075-a71c-7e7d74d8287a","Type":"ContainerStarted","Data":"ec6d910dc56d19025acfe2eb7b7c3531b6c73184e9512ef01428da47d3c91927"} Oct 06 16:29:58 crc kubenswrapper[4888]: I1006 16:29:58.771025 4888 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.167252 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8"] Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.168789 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.174267 4888 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.184376 4888 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.189841 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8"] Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.257238 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/575de536-ed3e-482a-8704-b45eb68c9fe0-secret-volume\") pod \"collect-profiles-29329470-k2bf8\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.257306 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/575de536-ed3e-482a-8704-b45eb68c9fe0-config-volume\") pod \"collect-profiles-29329470-k2bf8\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.257333 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k96jj\" (UniqueName: \"kubernetes.io/projected/575de536-ed3e-482a-8704-b45eb68c9fe0-kube-api-access-k96jj\") pod \"collect-profiles-29329470-k2bf8\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.358757 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/575de536-ed3e-482a-8704-b45eb68c9fe0-config-volume\") pod \"collect-profiles-29329470-k2bf8\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.359041 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k96jj\" (UniqueName: \"kubernetes.io/projected/575de536-ed3e-482a-8704-b45eb68c9fe0-kube-api-access-k96jj\") pod \"collect-profiles-29329470-k2bf8\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.359220 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/575de536-ed3e-482a-8704-b45eb68c9fe0-secret-volume\") pod \"collect-profiles-29329470-k2bf8\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.359746 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/575de536-ed3e-482a-8704-b45eb68c9fe0-config-volume\") pod 
\"collect-profiles-29329470-k2bf8\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.364318 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/575de536-ed3e-482a-8704-b45eb68c9fe0-secret-volume\") pod \"collect-profiles-29329470-k2bf8\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.386922 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k96jj\" (UniqueName: \"kubernetes.io/projected/575de536-ed3e-482a-8704-b45eb68c9fe0-kube-api-access-k96jj\") pod \"collect-profiles-29329470-k2bf8\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.492117 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.774198 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2qw9" event={"ID":"88bb52dd-c90e-4075-a71c-7e7d74d8287a","Type":"ContainerStarted","Data":"6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0"} Oct 06 16:30:00 crc kubenswrapper[4888]: I1006 16:30:00.961451 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8"] Oct 06 16:30:01 crc kubenswrapper[4888]: I1006 16:30:01.783380 4888 generic.go:334] "Generic (PLEG): container finished" podID="575de536-ed3e-482a-8704-b45eb68c9fe0" containerID="8dc621a680e6776a0b61ddd8e1df678d572b163d1fc8b93aacb28ded8d77bc62" exitCode=0 Oct 06 16:30:01 crc kubenswrapper[4888]: I1006 16:30:01.783424 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" event={"ID":"575de536-ed3e-482a-8704-b45eb68c9fe0","Type":"ContainerDied","Data":"8dc621a680e6776a0b61ddd8e1df678d572b163d1fc8b93aacb28ded8d77bc62"} Oct 06 16:30:01 crc kubenswrapper[4888]: I1006 16:30:01.783731 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" event={"ID":"575de536-ed3e-482a-8704-b45eb68c9fe0","Type":"ContainerStarted","Data":"79da9bb1db8f13b62eb77fa906479cad6b93e83f99e84abb7df5325484313d2f"} Oct 06 16:30:01 crc kubenswrapper[4888]: I1006 16:30:01.786693 4888 generic.go:334] "Generic (PLEG): container finished" podID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" containerID="6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0" exitCode=0 Oct 06 16:30:01 crc kubenswrapper[4888]: I1006 16:30:01.786742 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2qw9" event={"ID":"88bb52dd-c90e-4075-a71c-7e7d74d8287a","Type":"ContainerDied","Data":"6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0"} Oct 06 16:30:02 crc kubenswrapper[4888]: I1006 16:30:02.796631 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2qw9" 
event={"ID":"88bb52dd-c90e-4075-a71c-7e7d74d8287a","Type":"ContainerStarted","Data":"43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca"} Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.185195 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.205641 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h2qw9" podStartSLOduration=2.691882745 podStartE2EDuration="6.205622006s" podCreationTimestamp="2025-10-06 16:29:57 +0000 UTC" firstStartedPulling="2025-10-06 16:29:58.770736225 +0000 UTC m=+5338.583086943" lastFinishedPulling="2025-10-06 16:30:02.284475456 +0000 UTC m=+5342.096826204" observedRunningTime="2025-10-06 16:30:02.816200515 +0000 UTC m=+5342.628551253" watchObservedRunningTime="2025-10-06 16:30:03.205622006 +0000 UTC m=+5343.017972724" Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.316660 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/575de536-ed3e-482a-8704-b45eb68c9fe0-config-volume\") pod \"575de536-ed3e-482a-8704-b45eb68c9fe0\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.316833 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/575de536-ed3e-482a-8704-b45eb68c9fe0-secret-volume\") pod \"575de536-ed3e-482a-8704-b45eb68c9fe0\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.316967 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k96jj\" (UniqueName: \"kubernetes.io/projected/575de536-ed3e-482a-8704-b45eb68c9fe0-kube-api-access-k96jj\") pod \"575de536-ed3e-482a-8704-b45eb68c9fe0\" (UID: \"575de536-ed3e-482a-8704-b45eb68c9fe0\") " Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.317454 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/575de536-ed3e-482a-8704-b45eb68c9fe0-config-volume" (OuterVolumeSpecName: "config-volume") pod "575de536-ed3e-482a-8704-b45eb68c9fe0" (UID: "575de536-ed3e-482a-8704-b45eb68c9fe0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.322356 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575de536-ed3e-482a-8704-b45eb68c9fe0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "575de536-ed3e-482a-8704-b45eb68c9fe0" (UID: "575de536-ed3e-482a-8704-b45eb68c9fe0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.324521 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575de536-ed3e-482a-8704-b45eb68c9fe0-kube-api-access-k96jj" (OuterVolumeSpecName: "kube-api-access-k96jj") pod "575de536-ed3e-482a-8704-b45eb68c9fe0" (UID: "575de536-ed3e-482a-8704-b45eb68c9fe0"). InnerVolumeSpecName "kube-api-access-k96jj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.419968 4888 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/575de536-ed3e-482a-8704-b45eb68c9fe0-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.420000 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k96jj\" (UniqueName: \"kubernetes.io/projected/575de536-ed3e-482a-8704-b45eb68c9fe0-kube-api-access-k96jj\") on node \"crc\" DevicePath \"\"" Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.420010 4888 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/575de536-ed3e-482a-8704-b45eb68c9fe0-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.807603 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" event={"ID":"575de536-ed3e-482a-8704-b45eb68c9fe0","Type":"ContainerDied","Data":"79da9bb1db8f13b62eb77fa906479cad6b93e83f99e84abb7df5325484313d2f"} Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.807852 4888 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79da9bb1db8f13b62eb77fa906479cad6b93e83f99e84abb7df5325484313d2f" Oct 06 16:30:03 crc kubenswrapper[4888]: I1006 16:30:03.807651 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329470-k2bf8" Oct 06 16:30:04 crc kubenswrapper[4888]: I1006 16:30:04.269333 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv"] Oct 06 16:30:04 crc kubenswrapper[4888]: I1006 16:30:04.275389 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329425-8zwcv"] Oct 06 16:30:04 crc kubenswrapper[4888]: I1006 16:30:04.936199 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fcd42d0-d922-4a79-91e3-3d86c91def6f" path="/var/lib/kubelet/pods/9fcd42d0-d922-4a79-91e3-3d86c91def6f/volumes" Oct 06 16:30:07 crc kubenswrapper[4888]: I1006 16:30:07.483355 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:30:07 crc kubenswrapper[4888]: I1006 16:30:07.483851 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:30:07 crc kubenswrapper[4888]: I1006 16:30:07.539654 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:30:07 crc kubenswrapper[4888]: I1006 16:30:07.920065 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:30:07 crc kubenswrapper[4888]: I1006 16:30:07.985366 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h2qw9"] Oct 06 16:30:09 crc kubenswrapper[4888]: I1006 16:30:09.880895 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h2qw9" podUID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" containerName="registry-server" 
containerID="cri-o://43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca" gracePeriod=2 Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.388659 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.468443 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-utilities\") pod \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.468565 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn6z5\" (UniqueName: \"kubernetes.io/projected/88bb52dd-c90e-4075-a71c-7e7d74d8287a-kube-api-access-gn6z5\") pod \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.468602 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-catalog-content\") pod \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\" (UID: \"88bb52dd-c90e-4075-a71c-7e7d74d8287a\") " Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.471695 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-utilities" (OuterVolumeSpecName: "utilities") pod "88bb52dd-c90e-4075-a71c-7e7d74d8287a" (UID: "88bb52dd-c90e-4075-a71c-7e7d74d8287a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.475012 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88bb52dd-c90e-4075-a71c-7e7d74d8287a-kube-api-access-gn6z5" (OuterVolumeSpecName: "kube-api-access-gn6z5") pod "88bb52dd-c90e-4075-a71c-7e7d74d8287a" (UID: "88bb52dd-c90e-4075-a71c-7e7d74d8287a"). InnerVolumeSpecName "kube-api-access-gn6z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.529274 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88bb52dd-c90e-4075-a71c-7e7d74d8287a" (UID: "88bb52dd-c90e-4075-a71c-7e7d74d8287a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.570161 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn6z5\" (UniqueName: \"kubernetes.io/projected/88bb52dd-c90e-4075-a71c-7e7d74d8287a-kube-api-access-gn6z5\") on node \"crc\" DevicePath \"\"" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.570198 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.570211 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88bb52dd-c90e-4075-a71c-7e7d74d8287a-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.890906 4888 generic.go:334] "Generic (PLEG): container finished" podID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" containerID="43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca" exitCode=0 Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.891277 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h2qw9" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.891292 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2qw9" event={"ID":"88bb52dd-c90e-4075-a71c-7e7d74d8287a","Type":"ContainerDied","Data":"43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca"} Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.892240 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2qw9" event={"ID":"88bb52dd-c90e-4075-a71c-7e7d74d8287a","Type":"ContainerDied","Data":"ec6d910dc56d19025acfe2eb7b7c3531b6c73184e9512ef01428da47d3c91927"} Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.892265 4888 scope.go:117] "RemoveContainer" containerID="43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.924781 4888 scope.go:117] "RemoveContainer" containerID="6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.942712 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h2qw9"] Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.942934 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h2qw9"] Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.955780 4888 scope.go:117] "RemoveContainer" containerID="6a846560c367952a05afba27196ee235c2fab2e65d5a9eb41c5dcb14854b0f64" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.993968 4888 scope.go:117] "RemoveContainer" containerID="43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca" Oct 06 16:30:10 crc kubenswrapper[4888]: E1006 16:30:10.994946 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca\": container with ID starting with 43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca not found: ID does not exist" containerID="43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.994995 
4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca"} err="failed to get container status \"43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca\": rpc error: code = NotFound desc = could not find container \"43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca\": container with ID starting with 43eea4da17c3dc2edb25b225da251102a95d88afa3cbc859542144803fe890ca not found: ID does not exist" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.995031 4888 scope.go:117] "RemoveContainer" containerID="6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0" Oct 06 16:30:10 crc kubenswrapper[4888]: E1006 16:30:10.996758 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0\": container with ID starting with 6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0 not found: ID does not exist" containerID="6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.996825 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0"} err="failed to get container status \"6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0\": rpc error: code = NotFound desc = could not find container \"6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0\": container with ID starting with 6d595c6dc447da5fafe5d352185475f72d976f2ddc887de98a6fd9be85e9d6f0 not found: ID does not exist" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.996855 4888 scope.go:117] "RemoveContainer" containerID="6a846560c367952a05afba27196ee235c2fab2e65d5a9eb41c5dcb14854b0f64" Oct 06 16:30:10 crc kubenswrapper[4888]: E1006 16:30:10.997223 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a846560c367952a05afba27196ee235c2fab2e65d5a9eb41c5dcb14854b0f64\": container with ID starting with 6a846560c367952a05afba27196ee235c2fab2e65d5a9eb41c5dcb14854b0f64 not found: ID does not exist" containerID="6a846560c367952a05afba27196ee235c2fab2e65d5a9eb41c5dcb14854b0f64" Oct 06 16:30:10 crc kubenswrapper[4888]: I1006 16:30:10.997251 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a846560c367952a05afba27196ee235c2fab2e65d5a9eb41c5dcb14854b0f64"} err="failed to get container status \"6a846560c367952a05afba27196ee235c2fab2e65d5a9eb41c5dcb14854b0f64\": rpc error: code = NotFound desc = could not find container \"6a846560c367952a05afba27196ee235c2fab2e65d5a9eb41c5dcb14854b0f64\": container with ID starting with 6a846560c367952a05afba27196ee235c2fab2e65d5a9eb41c5dcb14854b0f64 not found: ID does not exist" Oct 06 16:30:12 crc kubenswrapper[4888]: I1006 16:30:12.984563 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" path="/var/lib/kubelet/pods/88bb52dd-c90e-4075-a71c-7e7d74d8287a/volumes" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.599833 4888 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vxvks"] Oct 06 16:30:13 crc kubenswrapper[4888]: E1006 16:30:13.600622 4888 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" containerName="extract-utilities" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.600651 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" containerName="extract-utilities" Oct 06 16:30:13 crc kubenswrapper[4888]: E1006 16:30:13.600684 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" containerName="extract-content" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.600697 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" containerName="extract-content" Oct 06 16:30:13 crc kubenswrapper[4888]: E1006 16:30:13.600716 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" containerName="registry-server" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.600729 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" containerName="registry-server" Oct 06 16:30:13 crc kubenswrapper[4888]: E1006 16:30:13.600758 4888 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575de536-ed3e-482a-8704-b45eb68c9fe0" containerName="collect-profiles" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.600770 4888 state_mem.go:107] "Deleted CPUSet assignment" podUID="575de536-ed3e-482a-8704-b45eb68c9fe0" containerName="collect-profiles" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.603165 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="575de536-ed3e-482a-8704-b45eb68c9fe0" containerName="collect-profiles" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.603318 4888 memory_manager.go:354] "RemoveStaleState removing state" podUID="88bb52dd-c90e-4075-a71c-7e7d74d8287a" containerName="registry-server" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.606361 4888 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.632122 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vxvks"] Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.642348 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-catalog-content\") pod \"redhat-operators-vxvks\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.642486 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-utilities\") pod \"redhat-operators-vxvks\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.642532 4888 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpmh8\" (UniqueName: \"kubernetes.io/projected/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-kube-api-access-bpmh8\") pod \"redhat-operators-vxvks\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.744007 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-catalog-content\") pod \"redhat-operators-vxvks\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.744075 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-utilities\") pod \"redhat-operators-vxvks\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.744124 4888 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpmh8\" (UniqueName: \"kubernetes.io/projected/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-kube-api-access-bpmh8\") pod \"redhat-operators-vxvks\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.744884 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-catalog-content\") pod \"redhat-operators-vxvks\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.745226 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-utilities\") pod \"redhat-operators-vxvks\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.765723 4888 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bpmh8\" (UniqueName: \"kubernetes.io/projected/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-kube-api-access-bpmh8\") pod \"redhat-operators-vxvks\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:13 crc kubenswrapper[4888]: I1006 16:30:13.937210 4888 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:14 crc kubenswrapper[4888]: I1006 16:30:14.334566 4888 scope.go:117] "RemoveContainer" containerID="08d701dd48d26b82a047684d8fddef8e2aaa066e72b23f5b64fcfecf80edd07c" Oct 06 16:30:14 crc kubenswrapper[4888]: I1006 16:30:14.356352 4888 scope.go:117] "RemoveContainer" containerID="bb687567ac180cb3371b1768891468427d0bd19b5810e2c71020a6974e92263e" Oct 06 16:30:14 crc kubenswrapper[4888]: I1006 16:30:14.442502 4888 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vxvks"] Oct 06 16:30:14 crc kubenswrapper[4888]: W1006 16:30:14.454232 4888 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73e6f19a_a220_4e2b_ae36_dc8d72b1df81.slice/crio-ff2b5458367b0b8bfb8d7d09dfda272844de2907a2a0a32fac9b3a6bbeba55da WatchSource:0}: Error finding container ff2b5458367b0b8bfb8d7d09dfda272844de2907a2a0a32fac9b3a6bbeba55da: Status 404 returned error can't find the container with id ff2b5458367b0b8bfb8d7d09dfda272844de2907a2a0a32fac9b3a6bbeba55da Oct 06 16:30:14 crc kubenswrapper[4888]: I1006 16:30:14.968238 4888 generic.go:334] "Generic (PLEG): container finished" podID="73e6f19a-a220-4e2b-ae36-dc8d72b1df81" containerID="68d2c004921ceacb25685f4f496b648496c666cb6f8162680aeb199e034c376d" exitCode=0 Oct 06 16:30:14 crc kubenswrapper[4888]: I1006 16:30:14.968538 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxvks" event={"ID":"73e6f19a-a220-4e2b-ae36-dc8d72b1df81","Type":"ContainerDied","Data":"68d2c004921ceacb25685f4f496b648496c666cb6f8162680aeb199e034c376d"} Oct 06 16:30:14 crc kubenswrapper[4888]: I1006 16:30:14.968561 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxvks" event={"ID":"73e6f19a-a220-4e2b-ae36-dc8d72b1df81","Type":"ContainerStarted","Data":"ff2b5458367b0b8bfb8d7d09dfda272844de2907a2a0a32fac9b3a6bbeba55da"} Oct 06 16:30:15 crc kubenswrapper[4888]: I1006 16:30:15.979838 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxvks" event={"ID":"73e6f19a-a220-4e2b-ae36-dc8d72b1df81","Type":"ContainerStarted","Data":"6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf"} Oct 06 16:30:19 crc kubenswrapper[4888]: I1006 16:30:19.009669 4888 generic.go:334] "Generic (PLEG): container finished" podID="73e6f19a-a220-4e2b-ae36-dc8d72b1df81" containerID="6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf" exitCode=0 Oct 06 16:30:19 crc kubenswrapper[4888]: I1006 16:30:19.009770 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxvks" event={"ID":"73e6f19a-a220-4e2b-ae36-dc8d72b1df81","Type":"ContainerDied","Data":"6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf"} Oct 06 16:30:20 crc kubenswrapper[4888]: I1006 16:30:20.022102 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxvks" 
event={"ID":"73e6f19a-a220-4e2b-ae36-dc8d72b1df81","Type":"ContainerStarted","Data":"77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2"} Oct 06 16:30:20 crc kubenswrapper[4888]: I1006 16:30:20.045606 4888 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vxvks" podStartSLOduration=2.251806481 podStartE2EDuration="7.045589658s" podCreationTimestamp="2025-10-06 16:30:13 +0000 UTC" firstStartedPulling="2025-10-06 16:30:14.970577002 +0000 UTC m=+5354.782927720" lastFinishedPulling="2025-10-06 16:30:19.764360169 +0000 UTC m=+5359.576710897" observedRunningTime="2025-10-06 16:30:20.043335897 +0000 UTC m=+5359.855686655" watchObservedRunningTime="2025-10-06 16:30:20.045589658 +0000 UTC m=+5359.857940376" Oct 06 16:30:23 crc kubenswrapper[4888]: I1006 16:30:23.939425 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:23 crc kubenswrapper[4888]: I1006 16:30:23.939857 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:24 crc kubenswrapper[4888]: I1006 16:30:24.996996 4888 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vxvks" podUID="73e6f19a-a220-4e2b-ae36-dc8d72b1df81" containerName="registry-server" probeResult="failure" output=< Oct 06 16:30:24 crc kubenswrapper[4888]: timeout: failed to connect service ":50051" within 1s Oct 06 16:30:24 crc kubenswrapper[4888]: > Oct 06 16:30:33 crc kubenswrapper[4888]: I1006 16:30:33.992125 4888 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:34 crc kubenswrapper[4888]: I1006 16:30:34.069783 4888 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:34 crc kubenswrapper[4888]: I1006 16:30:34.231706 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vxvks"] Oct 06 16:30:35 crc kubenswrapper[4888]: I1006 16:30:35.179933 4888 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vxvks" podUID="73e6f19a-a220-4e2b-ae36-dc8d72b1df81" containerName="registry-server" containerID="cri-o://77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2" gracePeriod=2 Oct 06 16:30:35 crc kubenswrapper[4888]: I1006 16:30:35.613627 4888 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:35 crc kubenswrapper[4888]: I1006 16:30:35.774170 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-catalog-content\") pod \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " Oct 06 16:30:35 crc kubenswrapper[4888]: I1006 16:30:35.774389 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpmh8\" (UniqueName: \"kubernetes.io/projected/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-kube-api-access-bpmh8\") pod \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " Oct 06 16:30:35 crc kubenswrapper[4888]: I1006 16:30:35.774476 4888 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-utilities\") pod \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\" (UID: \"73e6f19a-a220-4e2b-ae36-dc8d72b1df81\") " Oct 06 16:30:35 crc kubenswrapper[4888]: I1006 16:30:35.775652 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-utilities" (OuterVolumeSpecName: "utilities") pod "73e6f19a-a220-4e2b-ae36-dc8d72b1df81" (UID: "73e6f19a-a220-4e2b-ae36-dc8d72b1df81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:30:35 crc kubenswrapper[4888]: I1006 16:30:35.784990 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-kube-api-access-bpmh8" (OuterVolumeSpecName: "kube-api-access-bpmh8") pod "73e6f19a-a220-4e2b-ae36-dc8d72b1df81" (UID: "73e6f19a-a220-4e2b-ae36-dc8d72b1df81"). InnerVolumeSpecName "kube-api-access-bpmh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 16:30:35 crc kubenswrapper[4888]: I1006 16:30:35.853381 4888 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73e6f19a-a220-4e2b-ae36-dc8d72b1df81" (UID: "73e6f19a-a220-4e2b-ae36-dc8d72b1df81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 16:30:35 crc kubenswrapper[4888]: I1006 16:30:35.877159 4888 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpmh8\" (UniqueName: \"kubernetes.io/projected/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-kube-api-access-bpmh8\") on node \"crc\" DevicePath \"\"" Oct 06 16:30:35 crc kubenswrapper[4888]: I1006 16:30:35.877207 4888 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 16:30:35 crc kubenswrapper[4888]: I1006 16:30:35.877221 4888 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e6f19a-a220-4e2b-ae36-dc8d72b1df81-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.194752 4888 generic.go:334] "Generic (PLEG): container finished" podID="73e6f19a-a220-4e2b-ae36-dc8d72b1df81" containerID="77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2" exitCode=0 Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.194845 4888 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vxvks" Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.194847 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxvks" event={"ID":"73e6f19a-a220-4e2b-ae36-dc8d72b1df81","Type":"ContainerDied","Data":"77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2"} Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.195055 4888 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxvks" event={"ID":"73e6f19a-a220-4e2b-ae36-dc8d72b1df81","Type":"ContainerDied","Data":"ff2b5458367b0b8bfb8d7d09dfda272844de2907a2a0a32fac9b3a6bbeba55da"} Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.195078 4888 scope.go:117] "RemoveContainer" containerID="77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2" Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.231489 4888 scope.go:117] "RemoveContainer" containerID="6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf" Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.260924 4888 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vxvks"] Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.266382 4888 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vxvks"] Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.272437 4888 scope.go:117] "RemoveContainer" containerID="68d2c004921ceacb25685f4f496b648496c666cb6f8162680aeb199e034c376d" Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.312164 4888 scope.go:117] "RemoveContainer" containerID="77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2" Oct 06 16:30:36 crc kubenswrapper[4888]: E1006 16:30:36.312580 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2\": container with ID starting with 77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2 not found: ID does not exist" containerID="77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2" Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.312607 4888 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2"} err="failed to get container status \"77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2\": rpc error: code = NotFound desc = could not find container \"77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2\": container with ID starting with 77f30fc8802ca72552dccc74345f980b3ac05267496b39dca757d7dfb7dde4c2 not found: ID does not exist" Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.312628 4888 scope.go:117] "RemoveContainer" containerID="6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf" Oct 06 16:30:36 crc kubenswrapper[4888]: E1006 16:30:36.313027 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf\": container with ID starting with 6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf not found: ID does not exist" containerID="6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf" Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.313074 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf"} err="failed to get container status \"6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf\": rpc error: code = NotFound desc = could not find container \"6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf\": container with ID starting with 6a53e43e5403254ee8b6a21e9cbd9902b93fa53c1c2eec7172920e7ed56a5dcf not found: ID does not exist" Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.313105 4888 scope.go:117] "RemoveContainer" containerID="68d2c004921ceacb25685f4f496b648496c666cb6f8162680aeb199e034c376d" Oct 06 16:30:36 crc kubenswrapper[4888]: E1006 16:30:36.313362 4888 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68d2c004921ceacb25685f4f496b648496c666cb6f8162680aeb199e034c376d\": container with ID starting with 68d2c004921ceacb25685f4f496b648496c666cb6f8162680aeb199e034c376d not found: ID does not exist" containerID="68d2c004921ceacb25685f4f496b648496c666cb6f8162680aeb199e034c376d" Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.313394 4888 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68d2c004921ceacb25685f4f496b648496c666cb6f8162680aeb199e034c376d"} err="failed to get container status \"68d2c004921ceacb25685f4f496b648496c666cb6f8162680aeb199e034c376d\": rpc error: code = NotFound desc = could not find container \"68d2c004921ceacb25685f4f496b648496c666cb6f8162680aeb199e034c376d\": container with ID starting with 68d2c004921ceacb25685f4f496b648496c666cb6f8162680aeb199e034c376d not found: ID does not exist" Oct 06 16:30:36 crc kubenswrapper[4888]: I1006 16:30:36.938368 4888 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e6f19a-a220-4e2b-ae36-dc8d72b1df81" path="/var/lib/kubelet/pods/73e6f19a-a220-4e2b-ae36-dc8d72b1df81/volumes" Oct 06 16:31:14 crc kubenswrapper[4888]: I1006 16:31:14.471380 4888 scope.go:117] "RemoveContainer" containerID="6d01f970fa4dfd285735ba78c779d899fecf218170fde357d79434601de08630" Oct 06 16:31:14 crc kubenswrapper[4888]: I1006 16:31:14.510749 4888 scope.go:117] "RemoveContainer" 
containerID="982304d7eec30a896f2cb265c1ce4e6d3b0cc294be113861e3c985f47427c187" Oct 06 16:31:32 crc kubenswrapper[4888]: I1006 16:31:32.563610 4888 patch_prober.go:28] interesting pod/machine-config-daemon-spjkk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 16:31:32 crc kubenswrapper[4888]: I1006 16:31:32.564903 4888 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-spjkk" podUID="a145d9af-9431-4196-bd66-a095e39bf3ca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"